

Dagstuhl Seminar 13462

Computational Models of Language Meaning in Context

(Nov 10 – Nov 15, 2013)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/13462

Organizers

Contact


Program

Motivation

The term distributional semantics qualifies a rich family of computational methods sharing the assumption that the statistical distribution of words in context plays a key role in characterizing their semantic behavior. Distributional semantic models, such as LSA, HAL, etc., represent the meaning of a content word in terms of a distributed vector recording its pattern of co-occurrences (sometimes, in specific syntactic relations) with other content words within a corpus. Different types of semantic tasks and phenomena are then modeled in terms of linear algebra operations on distributional vectors. Distributional semantic models provide a quantitative correlate to the notion of semantic similarity, and are able to address various lexical semantic tasks, such as synonym identification, semantic classification, selectional preference modeling, and so forth.
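The co-occurrence vectors and similarity computations described above can be sketched in a few lines. The following is a deliberately minimal toy model, not any of the cited systems (LSA, HAL): it counts raw co-occurrences in a fixed window over a four-sentence corpus and compares the resulting sparse count vectors by cosine similarity.

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Toy distributional model: count co-occurrences within a symmetric window."""
    vectors = {}
    for sent in sentences:
        tokens = sent.lower().split()
        for i, w in enumerate(tokens):
            ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Hypothetical mini-corpus for illustration only.
corpus = [
    "the dog chased the cat",
    "the puppy chased the cat",
    "the dog ate the food",
    "the puppy ate the food",
]
vecs = cooccurrence_vectors(corpus)

# "dog" and "puppy" occur in identical contexts here, so their vectors match;
# "dog" and "chased" have different distributions, so their similarity is lower.
print(cosine(vecs["dog"], vecs["puppy"]))
print(cosine(vecs["dog"], vecs["chased"]))
```

Real models add weighting (e.g. PMI) and dimensionality reduction on top of such counts, but the underlying idea - similar contexts yield similar vectors - is already visible at this scale.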

Distributional semantics has become increasingly popular in Natural Language Processing. Its attractiveness lies in the fact that distributional representations do not require manual supervision and reduce the a priori stipulations in semantic modeling. Moreover, distributional models generally outperform other types of formal lexical representations, such as semantic networks. Many researchers have also strongly argued for the psychological validity of distributional semantic representations. Corpus-derived measures of semantic similarity have been assessed in a variety of psychological tasks ranging from similarity judgments to simulations of semantic and associative priming, showing a high correlation with human behavioral data.

Despite its successes, no single distributional semantic model meets all requirements posed by formal semantics or linguistic theory, nor do they cater for all aspects of meaning that are important to philosophers or cognitive scientists. In fact, the distributional paradigm raises the question of the extent to which semantic properties can be reduced to combinatorial relations. Many central aspects of natural language semantics are left out of the picture in distributional semantics, such as predication, compositionality, lexical inferences, quantification and anaphora, to name just a few. A central question about distributional models is whether and how distributional vectors can also be used in the compositional construction of meaning for constituents larger than words, and ultimately for sentences or discourses -- the traditional domains of denotation-based formal semantics. Being able to model key aspects of semantic composition and associated semantic entailments represents a crucial condition for distributional models to provide a more general model of meaning. Conversely, we may wonder whether distributional representations can help to model those aspects of meaning that notoriously challenge semantic compositionality, such as semantic context-sensitivity, polysemy, predicate coercion, pragmatically-induced reference and presupposition.
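One widely studied baseline for the compositionality question raised above is to compose phrase vectors by simple vector addition. The sketch below uses hypothetical 4-dimensional vectors (the values are illustrative, not corpus-derived) to show both what addition buys and where it falls short.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def add_compose(u, v):
    """Additive composition: a simple, order-insensitive baseline."""
    return [a + b for a, b in zip(u, v)]

# Hypothetical distributional vectors, for illustration only.
vec = {
    "black": [0.8, 0.1, 0.2, 0.0],
    "dark":  [0.7, 0.2, 0.1, 0.1],
    "cat":   [0.1, 0.9, 0.3, 0.2],
    "idea":  [0.0, 0.1, 0.2, 0.9],
}

black_cat = add_compose(vec["black"], vec["cat"])
dark_cat = add_compose(vec["dark"], vec["cat"])
black_idea = add_compose(vec["black"], vec["idea"])

# Phrases with a shared head noun and similar modifiers land close together,
# while phrases with different heads do not.
print(cosine(black_cat, dark_cat))
print(cosine(black_cat, black_idea))
```

Note what the baseline cannot do: addition is commutative, so it assigns the same vector regardless of word order, and it has no mechanism for predicate coercion or the other context-sensitivity phenomena listed above -- which is precisely why richer compositional operations are an open research question.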

The main question is whether the current limits of distributional semantics represent contingent shortcomings of existing models - shortcomings that future research may overcome - or whether they instead point to intrinsic inadequacies of vector-based representations for addressing key aspects of natural language semantics.

The following themes will be the focus of this seminar:

  1. The problems in conventional semantic models that distributional semantics claims to be able to solve;
  2. The promise of linking distributional semantics to multimodal representations;
  3. The current limitations of distributional semantics theories to account for linguistic compositionality;
  4. The absence of any robust first-order models of inference for distributional semantics;
  5. The integration of distributional semantic principles and techniques into a broader dynamic model-theoretic framework.

Summary

The term distributional semantics qualifies a rich family of computational methods sharing the assumption that the statistical distribution of words in context plays a key role in characterizing their semantic behavior. Distributional semantic models, such as LSA, HAL, etc., represent the meaning of a content word in terms of a distributed vector recording its pattern of co-occurrences (sometimes, in specific syntactic relations) with other content words within a corpus. Different types of semantic tasks and phenomena are then modeled in terms of linear algebra operations on distributional vectors. Distributional semantic models provide a quantitative correlate to the notion of semantic similarity, and are able to address various lexical semantic tasks, such as synonym identification, semantic classification, selectional preference modeling, and so forth.

Distributional semantics has become increasingly popular in Natural Language Processing. Its attractiveness lies in the fact that distributional representations do not require manual supervision and reduce the a priori stipulations in semantic modeling. Moreover, distributional models generally outperform other types of formal lexical representations, such as semantic networks. Many researchers have also strongly argued for the psychological validity of distributional semantic representations. Corpus-derived measures of semantic similarity have been assessed in a variety of psychological tasks ranging from similarity judgments to simulations of semantic and associative priming, showing a high correlation with human behavioral data.

Despite its successes, no single distributional semantic model meets all requirements posed by formal semantics or linguistic theory, nor do they cater for all aspects of meaning that are important to philosophers or cognitive scientists. In fact, the distributional paradigm raises the question of the extent to which semantic properties can be reduced to combinatorial relations. Many central aspects of natural language semantics are left out of the picture in distributional semantics, such as predication, compositionality, lexical inferences, quantification and anaphora, to name just a few. A central question about distributional models is whether and how distributional vectors can also be used in the compositional construction of meaning for constituents larger than words, and ultimately for sentences or discourses -- the traditional domains of denotation-based formal semantics. Being able to model key aspects of semantic composition and associated semantic entailments represents a crucial condition for distributional models to provide a more general model of meaning. Conversely, we may wonder whether distributional representations can help to model those aspects of meaning that notoriously challenge semantic compositionality, such as semantic context-sensitivity, polysemy, predicate coercion, pragmatically-induced reference and presupposition.

The main question is whether the current limits of distributional semantics represent contingent shortcomings of existing models -- shortcomings that future research may overcome -- or whether they instead point to intrinsic inadequacies of vector-based representations for addressing key aspects of natural language semantics. To this end, the participants addressed five themes:

  • The problems in conventional semantic models that distributional semantics claims to be able to solve;
  • The promise of linking distributional semantics to multimodal representations;
  • The current limitations of distributional semantics theories to account for linguistic compositionality;
  • The absence of any robust first-order models of inference for distributional semantics;
  • The integration of distributional semantic principles and techniques into a broader dynamic model-theoretic framework.
Copyright Hans Kamp, Alessandro Lenci, and James Pustejovsky

Participants
  • Nicholas Asher (Paul Sabatier University - Toulouse, FR) [dblp]
  • Marco Baroni (University of Trento, IT) [dblp]
  • Peter A. Cariani (Harvard Medical School - Newton, US) [dblp]
  • Stephen Clark (University of Cambridge, GB) [dblp]
  • Ann Copestake (University of Cambridge, GB) [dblp]
  • Ido Dagan (Bar-Ilan University - Ramat Gan, IL) [dblp]
  • Katrin Erk (University of Texas - Austin, US) [dblp]
  • Stefan Evert (Universität Erlangen-Nürnberg, DE) [dblp]
  • Patrick W. Hanks (University of Wolverhampton, GB) [dblp]
  • Graeme Hirst (University of Toronto, CA) [dblp]
  • Jerry R. Hobbs (USC - Marina del Rey, US) [dblp]
  • Hans Kamp (Universität Stuttgart, DE) [dblp]
  • Lauri Karttunen (Stanford University, US) [dblp]
  • Alessandro Lenci (University of Pisa, IT) [dblp]
  • Sebastian Löbner (Heinrich-Heine-Universität Düsseldorf, DE) [dblp]
  • Louise McNally (UPF - Barcelona, ES) [dblp]
  • Sebastian Padó (Universität Stuttgart, DE) [dblp]
  • Massimo Poesio (University of Essex, GB) [dblp]
  • James Pustejovsky (Brandeis University - Waltham, US) [dblp]
  • Anna Rumshisky (University of Massachusetts - Lowell, US) [dblp]
  • Hinrich Schütze (LMU München, DE) [dblp]
  • Mark Steedman (University of Edinburgh, GB) [dblp]
  • Suzanne Stevenson (University of Toronto, CA) [dblp]
  • Tim van de Cruys (Paul Sabatier University - Toulouse, FR) [dblp]
  • Jan van Eijck (CWI - Amsterdam, NL) [dblp]
  • Dominic Widdows (Microsoft Bing - Bellevue, US) [dblp]
  • Annie Zaenen (Stanford University, US) [dblp]
  • Alessandra Zarcone (Universität Stuttgart, DE) [dblp]

Classification
  • artificial intelligence / robotics
  • semantics / formal methods
  • soft computing / evolutionary algorithms

Keywords
  • compositionality
  • distributional semantics
  • statistical inference
  • logical inference