10.11.13 - 15.11.13, Seminar 13462

Computational Models of Language Meaning in Context

The following text appeared on our web pages prior to the seminar, and was included as part of the invitation.


The term distributional semantics qualifies a rich family of computational methods sharing the assumption that the statistical distribution of words in context plays a key role in characterizing their semantic behavior. Distributional semantic models, such as LSA and HAL, represent the meaning of a content word in terms of a distributed vector recording its pattern of co-occurrences (sometimes in specific syntactic relations) with other content words within a corpus. Different types of semantic tasks and phenomena are then modeled in terms of linear algebra operations on distributional vectors. Distributional semantic models provide a quantitative correlate to the notion of semantic similarity, and are able to address various lexical semantic tasks, such as synonym identification, semantic classification, and selectional preference modeling.
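The core pipeline described above can be sketched in a few lines: count each word's co-occurrences within a context window, then compare the resulting count vectors with cosine similarity, the standard geometric correlate of semantic similarity. The corpus below is a toy invention for illustration; real models use large corpora and typically reweight raw counts (e.g., with PMI) before comparison.

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(corpus, window=2):
    """Map each word to a Counter of its co-occurrences with words
    inside a symmetric context window (a toy stand-in for corpus statistics)."""
    vectors = {}
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vectors.setdefault(word, Counter()).update(context)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm_u = sqrt(sum(c * c for c in u.values()))
    norm_v = sqrt(sum(c * c for c in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

# Invented mini-corpus: "dog" and "cat" occur in more similar contexts
# than "dog" and "bone", so their vectors should be closer.
corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog ate the bone",
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["dog"], vecs["cat"]))   # higher: shared contexts
print(cosine(vecs["dog"], vecs["bone"]))  # lower: fewer shared contexts
```

On a corpus this small the absolute numbers are meaningless; the point is only that semantic similarity falls out of distributional overlap, with no hand-coded lexical knowledge.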

Distributional semantics has become increasingly popular in Natural Language Processing. Its attractiveness lies in the fact that distributional representations do not require manual supervision and reduce the a priori stipulations in semantic modeling. Moreover, distributional models generally outperform other types of formal lexical representations, such as semantic networks. Many researchers have also strongly argued for the psychological validity of distributional semantic representations. Corpus-derived measures of semantic similarity have been assessed in a variety of psychological tasks ranging from similarity judgments to simulations of semantic and associative priming, showing a high correlation with human behavioral data.

Despite these successes, no single distributional semantic model meets all the requirements posed by formal semantics or linguistic theory, nor does any cater for all aspects of meaning that are important to philosophers or cognitive scientists. In fact, the distributional paradigm raises the question of the extent to which semantic properties can be reduced to combinatorial relations. Many central aspects of natural language semantics are left out of the picture in distributional semantics, such as predication, compositionality, lexical inference, quantification and anaphora, to name just a few. A central question about distributional models is whether and how distributional vectors can also be used in the compositional construction of meaning for constituents larger than words, and ultimately for sentences or discourses -- the traditional domains of denotation-based formal semantics. Being able to model key aspects of semantic composition and the associated semantic entailments is a crucial condition for distributional models to provide a more general model of meaning. Conversely, we may wonder whether distributional representations can help to model those aspects of meaning that notoriously challenge semantic compositionality, such as semantic context-sensitivity, polysemy, predicate coercion, pragmatically induced reference and presupposition.
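To make the compositionality question concrete, consider two of the simplest composition functions studied in the compositional distributional literature (e.g., by Mitchell and Lapata): vector addition and pointwise multiplication. The word vectors and feature names below are invented for illustration; the sketch only shows how a phrase vector might be derived from its parts, and why such simple operations struggle with the phenomena listed above (both are order-insensitive, so they cannot distinguish "dog bites man" from "man bites dog").

```python
from collections import Counter

def compose_add(*word_vectors):
    """Additive composition: the phrase vector is the sum of its
    word vectors. Simple, but insensitive to word order and syntax."""
    result = Counter()
    for v in word_vectors:
        result.update(v)
    return result

def compose_mult(u, v):
    """Pointwise multiplicative composition: keeps only features
    shared by both words, emphasizing their common contexts."""
    return Counter({k: u[k] * v[k] for k in u if k in v})

# Hypothetical toy vectors over a handful of context features.
black = Counter({"dark": 4, "color": 3, "coffee": 1})
dog   = Counter({"animal": 5, "bark": 3, "dark": 1})

additive = compose_add(black, dog)        # union of features
multiplicative = compose_mult(black, dog) # only the shared feature "dark"
print(additive)
print(multiplicative)
```

The contrast between the two outputs illustrates the design space: addition blends all features of both words, while multiplication acts as a feature intersection; neither, by itself, models predication, quantification, or entailment, which is precisely the gap the seminar addresses.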

The main question is whether the current limits of distributional semantics represent contingent shortcomings of existing models -- hopefully to be overcome by future research -- or whether they instead point to intrinsic inadequacies of vector-based representations for addressing key aspects of natural language semantics.

The following themes will be the focus of this seminar:

  1. The problems in conventional semantic models that distributional semantics claims to be able to solve;
  2. The promise of linking distributional semantics to multimodal representations;
  3. The current limitations of distributional semantic theories in accounting for linguistic compositionality;
  4. The absence of any robust first-order models of inference for distributional semantics;
  5. The integration of distributional semantic principles and techniques into a broader dynamic model theoretic framework.