Dagstuhl Seminar 13462

Computational Models of Language Meaning in Context

(Nov 10 – Nov 15, 2013)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/13462

Motivation

The term distributional semantics refers to a rich family of computational methods sharing the assumption that the statistical distribution of words in context plays a key role in characterizing their semantic behavior. Distributional semantic models, such as LSA and HAL, represent the meaning of a content word as a distributed vector recording its pattern of co-occurrences (sometimes in specific syntactic relations) with other content words within a corpus. Different types of semantic tasks and phenomena are then modeled in terms of linear algebra operations on distributional vectors. Distributional semantic models provide a quantitative correlate to the notion of semantic similarity and are able to address various lexical semantic tasks, such as synonym identification, semantic classification, and selectional preference modeling.
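As an informal illustration of the count-based approach described above (not part of the seminar materials), the following Python sketch builds sparse co-occurrence vectors from a toy corpus and compares words by cosine similarity; the corpus, window size, and function names are hypothetical choices:

```python
from collections import defaultdict
from math import sqrt

# Toy corpus standing in for the large corpora real models are trained on.
corpus = [
    "the dog chased the cat",
    "the dog bit the man",
    "the cat chased the mouse",
    "the man walked the dog",
]

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a sparse vector of context-word counts
    gathered within a symmetric window around each token."""
    vectors = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[w][words[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse (dict-valued) vectors."""
    dot = sum(weight * v.get(ctx, 0) for ctx, weight in u.items())
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

vecs = cooccurrence_vectors(corpus)
# Words occurring in similar contexts receive similar vectors.
print(cosine(vecs["dog"], vecs["cat"]))
```

Real models replace the raw counts with association weights such as pointwise mutual information and often reduce dimensionality (as in LSA), but the similarity computation follows the same pattern.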

Distributional semantics has become increasingly popular in Natural Language Processing. Its attractiveness lies in the fact that distributional representations do not require manual supervision and reduce the a priori stipulations in semantic modeling. Moreover, distributional models generally outperform other types of formal lexical representations, such as semantic networks. Many researchers have also argued strongly for the psychological validity of distributional semantic representations. Corpus-derived measures of semantic similarity have been assessed in a variety of psychological tasks ranging from similarity judgments to simulations of semantic and associative priming, showing a high correlation with human behavioral data.

Despite these successes, no single distributional semantic model meets all the requirements posed by formal semantics or linguistic theory, nor do such models cater for all aspects of meaning that are important to philosophers or cognitive scientists. In fact, the distributional paradigm raises the question of the extent to which semantic properties can be reduced to combinatorial relations. Many central aspects of natural language semantics are left out of the picture in distributional semantics, such as predication, compositionality, lexical inference, quantification, and anaphora, to name just a few. A central question about distributional models is whether and how distributional vectors can also be used in the compositional construction of meaning for constituents larger than words, and ultimately for sentences or discourses, the traditional domains of denotation-based formal semantics. Being able to model key aspects of semantic composition and the associated semantic entailments is a crucial condition for distributional models to provide a more general model of meaning. Conversely, we may wonder whether distributional representations can help to model those aspects of meaning that notoriously challenge semantic compositionality, such as semantic context-sensitivity, polysemy, predicate coercion, pragmatically induced reference, and presupposition.
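To make the compositionality question concrete: the simplest composition operators proposed in the literature build a phrase vector from its parts by element-wise addition or multiplication. A minimal sketch with hypothetical toy vectors (real models use vectors with hundreds or thousands of dimensions):

```python
# Element-wise composition of word vectors into a phrase vector.
# The three-dimensional vectors below are made up for illustration.

def add_compose(u, v):
    """Additive composition: sum each dimension."""
    return [a + b for a, b in zip(u, v)]

def mult_compose(u, v):
    """Multiplicative composition: multiply each dimension,
    emphasizing features shared by both words."""
    return [a * b for a, b in zip(u, v)]

red = [0.9, 0.1, 0.4]
car = [0.2, 0.8, 0.5]

phrase_add = add_compose(red, car)
phrase_mult = mult_compose(red, car)
```

Both operators are order-insensitive, so "dog bites man" and "man bites dog" receive the same vector; this is one reason the compositionality theme below looks for more structured alternatives.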

The main question is whether the current limits of distributional semantics represent contingent shortcomings of existing models, hopefully to be overcome by future research, or whether they point to intrinsic inadequacies of vector-based representations for addressing key aspects of natural language semantics.

The following themes will be the focus of this seminar:

  1. The problems in conventional semantic models that distributional semantics claims to be able to solve;
  2. The promise of linking distributional semantics to multimodal representations;
  3. The current limitations of distributional semantics theories to account for linguistic compositionality;
  4. The absence of any robust first-order models of inference for distributional semantics;
  5. The integration of distributional semantic principles and techniques into a broader dynamic model-theoretic framework.

Summary

The term distributional semantics refers to a rich family of computational methods sharing the assumption that the statistical distribution of words in context plays a key role in characterizing their semantic behavior. Distributional semantic models, such as LSA and HAL, represent the meaning of a content word as a distributed vector recording its pattern of co-occurrences (sometimes in specific syntactic relations) with other content words within a corpus. Different types of semantic tasks and phenomena are then modeled in terms of linear algebra operations on distributional vectors. Distributional semantic models provide a quantitative correlate to the notion of semantic similarity and are able to address various lexical semantic tasks, such as synonym identification, semantic classification, and selectional preference modeling.

Distributional semantics has become increasingly popular in Natural Language Processing. Its attractiveness lies in the fact that distributional representations do not require manual supervision and reduce the a priori stipulations in semantic modeling. Moreover, distributional models generally outperform other types of formal lexical representations, such as semantic networks. Many researchers have also argued strongly for the psychological validity of distributional semantic representations. Corpus-derived measures of semantic similarity have been assessed in a variety of psychological tasks ranging from similarity judgments to simulations of semantic and associative priming, showing a high correlation with human behavioral data.

Despite these successes, no single distributional semantic model meets all the requirements posed by formal semantics or linguistic theory, nor do such models cater for all aspects of meaning that are important to philosophers or cognitive scientists. In fact, the distributional paradigm raises the question of the extent to which semantic properties can be reduced to combinatorial relations. Many central aspects of natural language semantics are left out of the picture in distributional semantics, such as predication, compositionality, lexical inference, quantification, and anaphora, to name just a few. A central question about distributional models is whether and how distributional vectors can also be used in the compositional construction of meaning for constituents larger than words, and ultimately for sentences or discourses, the traditional domains of denotation-based formal semantics. Being able to model key aspects of semantic composition and the associated semantic entailments is a crucial condition for distributional models to provide a more general model of meaning. Conversely, we may wonder whether distributional representations can help to model those aspects of meaning that notoriously challenge semantic compositionality, such as semantic context-sensitivity, polysemy, predicate coercion, pragmatically induced reference, and presupposition.

The main question is whether the current limits of distributional semantics represent contingent shortcomings of existing models, hopefully to be overcome by future research, or whether they point to intrinsic inadequacies of vector-based representations for addressing key aspects of natural language semantics. To this end, the participants addressed five themes:

  • The problems in conventional semantic models that distributional semantics claims to be able to solve;
  • The promise of linking distributional semantics to multimodal representations;
  • The current limitations of distributional semantics theories to account for linguistic compositionality;
  • The absence of any robust first-order models of inference for distributional semantics;
  • The integration of distributional semantic principles and techniques into a broader dynamic model-theoretic framework.
Copyright Hans Kamp, Alessandro Lenci, and James Pustejovsky

Participants
  • Nicholas Asher (Paul Sabatier University - Toulouse, FR) [dblp]
  • Marco Baroni (University of Trento, IT) [dblp]
  • Peter A. Cariani (Harvard Medical School - Newton, US) [dblp]
  • Stephen Clark (University of Cambridge, GB) [dblp]
  • Ann Copestake (University of Cambridge, GB) [dblp]
  • Ido Dagan (Bar-Ilan University - Ramat Gan, IL) [dblp]
  • Katrin Erk (University of Texas - Austin, US) [dblp]
  • Stefan Evert (Universität Erlangen-Nürnberg, DE) [dblp]
  • Patrick W. Hanks (University of Wolverhampton, GB) [dblp]
  • Graeme Hirst (University of Toronto, CA) [dblp]
  • Jerry R. Hobbs (USC - Marina del Rey, US) [dblp]
  • Hans Kamp (Universität Stuttgart, DE) [dblp]
  • Lauri Karttunen (Stanford University, US) [dblp]
  • Alessandro Lenci (University of Pisa, IT) [dblp]
  • Sebastian Löbner (Heinrich-Heine-Universität Düsseldorf, DE) [dblp]
  • Louise McNally (UPF - Barcelona, ES) [dblp]
  • Sebastian Padó (Universität Stuttgart, DE) [dblp]
  • Massimo Poesio (University of Essex, GB) [dblp]
  • James Pustejovsky (Brandeis University - Waltham, US) [dblp]
  • Anna Rumshisky (University of Massachusetts - Lowell, US) [dblp]
  • Hinrich Schütze (LMU München, DE) [dblp]
  • Mark Steedman (University of Edinburgh, GB) [dblp]
  • Suzanne Stevenson (University of Toronto, CA) [dblp]
  • Tim van de Cruys (Paul Sabatier University - Toulouse, FR) [dblp]
  • Jan van Eijck (CWI - Amsterdam, NL) [dblp]
  • Dominic Widdows (Microsoft Bing - Bellevue, US) [dblp]
  • Annie Zaenen (Stanford University, US) [dblp]
  • Alessandra Zarcone (Universität Stuttgart, DE) [dblp]

Classification
  • artificial intelligence / robotics
  • semantics / formal methods
  • soft computing / evolutionary algorithms

Keywords
  • compositionality
  • distributional semantics
  • statistical inference
  • logical inference