http://www.dagstuhl.de/13462

November 10–15, 2013, Dagstuhl Seminar 13462

Computational Models of Language Meaning in Context

Organizers

Hans Kamp (Universität Stuttgart, DE)
Alessandro Lenci (University of Pisa, IT)
James Pustejovsky (Brandeis University – Waltham, US)

Documents

Dagstuhl Report, Volume 3, Issue 11

Summary

The term distributional semantics refers to a rich family of computational methods sharing the assumption that the statistical distribution of words in context plays a key role in characterizing their semantic behavior. Distributional semantic models, such as LSA and HAL, represent the meaning of a content word as a distributed vector recording its pattern of co-occurrence (sometimes restricted to specific syntactic relations) with other content words in a corpus. Different types of semantic tasks and phenomena are then modeled in terms of linear algebra operations on distributional vectors. Distributional semantic models provide a quantitative correlate to the notion of semantic similarity and are able to address a variety of lexical semantic tasks, such as synonym identification, semantic classification, and selectional preference modeling.
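As a rough illustration of the idea (not of any specific model such as LSA or HAL, which work at much larger scale and apply dimensionality reduction), the sketch below builds sentence-level co-occurrence vectors from a handful of invented sentences and compares words by cosine similarity. The corpus and all counts are hypothetical:

```python
import math
from collections import defaultdict

# A toy corpus; real distributional models are trained on corpora of
# millions of words. All sentences here are invented for illustration.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

def cooccurrence_vectors(sentences):
    """Count, for each word, its co-occurrences with every other
    word token in the same sentence (a crude context window)."""
    vectors = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            for j, c in enumerate(words):
                if i != j:
                    vectors[w][c] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts (both chase, both follow
# "the"), so their cosine similarity comes out comparatively high.
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["cheese"]))
```

Even on four sentences, the distributional signal ranks "dog" as more similar to "cat" than "cheese" is, which is the kind of graded similarity judgment these models are designed to deliver.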

Distributional semantics has become increasingly popular in Natural Language Processing. Its attractiveness lies in the fact that distributional representations do not require manual supervision and reduce the a priori stipulations in semantic modeling. Moreover, distributional models generally outperform other types of formal lexical representations, such as semantic networks. Many researchers have also strongly argued for the psychological validity of distributional semantic representations. Corpus-derived measures of semantic similarity have been assessed in a variety of psychological tasks, ranging from similarity judgments to simulations of semantic and associative priming, and show a high correlation with human behavioral data.

Despite its successes, no single distributional semantic model meets all the requirements posed by formal semantics or linguistic theory, nor do such models cater for all aspects of meaning that are important to philosophers or cognitive scientists. In fact, the distributional paradigm raises the question of the extent to which semantic properties can be reduced to combinatorial relations. Many central aspects of natural language semantics are left out of the picture in distributional semantics, such as predication, compositionality, lexical inference, quantification, and anaphora, to name just a few. A central question about distributional models is whether and how distributional vectors can also be used in the compositional construction of meaning for constituents larger than words, and ultimately for sentences or discourses -- the traditional domains of denotation-based formal semantics. Being able to model key aspects of semantic composition and the associated semantic entailments is a crucial condition for distributional models to provide a more general model of meaning. Conversely, we may wonder whether distributional representations can help to model those aspects of meaning that notoriously challenge semantic compositionality, such as semantic context-sensitivity, polysemy, predicate coercion, pragmatically induced reference, and presupposition.
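One simple family of proposals in the compositional distributional literature combines word vectors componentwise, by addition or by multiplication. The sketch below, using invented vectors, shows both operations and also hints at why they fall short of full compositionality: both are commutative, so they are blind to word order and syntactic structure.

```python
# Two minimal composition functions over distributional vectors:
# componentwise addition and componentwise multiplication. These are
# one simple family of proposals from the literature, shown purely
# for illustration; all vectors below are invented.

def add_compose(u, v):
    """Additive composition: sum the two vectors componentwise."""
    return [a + b for a, b in zip(u, v)]

def mult_compose(u, v):
    """Multiplicative composition: retain features shared by both words."""
    return [a * b for a, b in zip(u, v)]

# Hypothetical 4-dimensional vectors for "red" and "car".
red = [0.9, 0.1, 0.4, 0.0]
car = [0.2, 0.8, 0.5, 0.3]

print(add_compose(red, car))
print(mult_compose(red, car))

# Both operations are commutative, so they assign the same vector to
# "dog bites man" and "man bites dog" -- one reason simple vector
# mixing cannot be the whole story about semantic composition.
assert add_compose(red, car) == add_compose(car, red)
```

Richer proposals replace these pointwise operations with syntax-sensitive functions (for example, representing adjectives as matrices applied to noun vectors), precisely to recover the asymmetries that addition and multiplication erase.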

The main question is whether the current limits of distributional semantics are contingent shortcomings of existing models -- hopefully to be overcome by future research -- or whether they point to intrinsic inadequacies of vector-based representations for addressing key aspects of natural language semantics. To explore this question, the participants addressed five themes:

  • the problems in conventional semantic models that distributional semantics claims to be able to solve;
  • the promise of linking distributional semantics to multimodal representations;
  • the current limitations of distributional semantic theories in accounting for linguistic compositionality;
  • the absence of any robust first-order models of inference for distributional semantics;
  • the integration of distributional semantic principles and techniques into a broader dynamic model-theoretic framework.
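As an example of the kind of heuristic inference that has been proposed in place of first-order reasoning, the sketch below implements one reading of the distributional inclusion hypothesis: a word is taken to entail another when its salient contexts are (largely) contained in the other's, as "dog" entails "animal". The context vectors are invented for illustration:

```python
# A rough sketch of the distributional inclusion hypothesis: the degree
# to which the contexts of u are covered by the contexts of v is taken
# as evidence that u entails v. The vectors below are hypothetical
# weighted context dictionaries, not real corpus counts.

def inclusion_score(u, v):
    """Fraction of u's total context weight whose contexts also occur
    with v (degree of feature inclusion of u in v)."""
    covered = sum(w for c, w in u.items() if c in v)
    total = sum(u.values())
    return covered / total if total else 0.0

dog = {"barks": 3, "pet": 2, "runs": 4}
animal = {"barks": 1, "pet": 1, "runs": 2, "flies": 2, "swims": 1}

print(inclusion_score(dog, animal))  # 1.0: every context of "dog" occurs with "animal"
print(inclusion_score(animal, dog))  # lower: "animal" has contexts "dog" lacks
```

The asymmetry of the score is the point: unlike cosine similarity, it distinguishes "dog entails animal" from "animal entails dog", but it remains a graded heuristic rather than a sound logical inference rule, which is exactly the gap the fourth theme above concerns.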
License
  Creative Commons BY 3.0 Unported license
  Hans Kamp, Alessandro Lenci, and James Pustejovsky

Classification

  • Artificial Intelligence / Robotics
  • Semantics / Formal Methods
  • Soft Computing / Evolutionary Algorithms

Keywords

  • Compositionality
  • Distributional semantics
  • Statistical inference
  • Logical inference

Book exhibition

Books from the participants of the current seminar

Book exhibition in the library, ground floor, during the seminar week.

Documentation

Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.

 


Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please let us know when a publication appears as a result of your seminar. Such publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.
