https://www.dagstuhl.de/19021

January 6–11, 2019, Dagstuhl Seminar 19021

Joint Processing of Language and Visual Data for Better Automated Understanding

Organizers

Yun Fu (Northeastern University – Boston, US)
Marie-Francine Moens (KU Leuven, BE)
Lucia Specia (Imperial College London, GB)
Tinne Tuytelaars (KU Leuven, BE)

For support, please contact

Dagstuhl Service Team

Documents

Dagstuhl Report, Volume 9, Issue 1
Aims & Scope
List of Participants
Shared Documents
Dagstuhl Seminar Schedule [pdf]

Summary

The joint processing of language and visual data has recently received a lot of attention. This emerging research field is stimulated by the active development of deep learning algorithms. For instance, deep neural networks (DNNs) offer numerous opportunities to learn mappings between the visual and language modalities and to learn multimodal representations of content. Deep learning has also recently become a standard approach for automated image and video captioning, that is, describing images or video with natural language sentences, and for visual question answering, that is, formulating a natural language answer to a natural language question about an image.
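As a concrete illustration of such multimodal representation learning, the following is a minimal sketch (not taken from the seminar) of a joint image-text embedding trained with a contrastive objective in PyTorch. All dimensions, names, and the use of precomputed features are illustrative assumptions.

```python
# Minimal sketch: projecting image and text features into a shared
# embedding space and training it with a symmetric contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, embed_dim=256):
        super().__init__()
        # Project precomputed image features (e.g. pooled CNN features)
        # and text features (e.g. averaged word embeddings) into one space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)

    def forward(self, img_feats, txt_feats):
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    # Similarity of every image to every caption in the batch;
    # matching image-caption pairs lie on the diagonal.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0))
    # Symmetric cross-entropy: image-to-text and text-to-image retrieval.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random tensors standing in for real encoder outputs.
model = JointEmbedding()
img, txt = model(torch.randn(8, 2048), torch.randn(8, 300))
loss = contrastive_loss(img, txt)
loss.backward()
```

The same shared space supports cross-modal retrieval in both directions, which is one reason contrastive objectives have become a common choice for vision-language pretraining.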

Apart from aiding image understanding and the indexing and search of image and video data through natural language descriptions, the field of jointly processing language and visual data builds algorithms for grounded language processing, in which the meaning of natural language is grounded in perception and/or actions in the world. Grounded language processing contributes to automated language understanding and to machine translation. It has recently been shown that visual data provide world and common-sense knowledge that is needed for automated language understanding.

Joint processing of language and visual data is also interesting from a theoretical point of view: for developing theories on the complementarity of such data in human(-machine) communication, for developing suitable algorithms for learning statistical knowledge representations informed by visual and language data, and for performing inference with these representations.

Given the current trend and results of multimodal (language and vision) research, it can safely be assumed that the joint processing of language and visual data will only gain in importance. During the seminar we discussed theories, methodologies, and real-world technologies for the joint processing of language and vision, particularly in the following research areas:

  • Theories of integrated modelling and representation learning of language and vision for computer vision and natural language processing tasks;
  • Explainability and interpretability of the learned representations;
  • Fusion and inference based on visual, language and multimodal representations;
  • Understanding human language and visual content;
  • Generation of language and visual content;
  • Relation to human learning;
  • Datasets and tasks.

The discussions attempted to answer the following research questions (a non-exhaustive list):

  • Which machine learning architectures are best suited for the above tasks?
  • How to learn multimodal representations that are relational and structured, so as to support structured understanding?
  • How to generalize to the recognition of categories with few or zero training examples (see the sketch after this list)?
  • How to learn from limited paired data while exploiting monomodal models trained on visual or language data alone?
  • How to explain neural networks trained for image or language understanding?
  • How to disentangle representations, that is, factorize them to separate the different factors of variation and discover their meaning?
  • How to learn continuous representations that describe semantics and integrate world and common-sense knowledge?
  • How to reason with these continuous representations?
  • How to translate content from one modality to another?
  • What would be effective novel evaluation metrics?
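One of these questions, generalizing to categories with few or zero training examples, is often approached through a shared embedding space: an image is assigned to the class whose textual description embeds closest to it, so classes never seen during training can still be recognized. The sketch below is a hypothetical illustration, assuming that normalized image and class-name embeddings (for example, from a model like the one sketched earlier) are already available.

```python
# Minimal zero-shot classification sketch: nearest class-name embedding
# in a shared image-text space. Names and data here are illustrative.
import torch
import torch.nn.functional as F

def zero_shot_classify(img_emb, class_text_embs, class_names):
    # img_emb: (D,) normalized image embedding
    # class_text_embs: (C, D) normalized embeddings of class-name strings
    sims = class_text_embs @ img_emb  # cosine similarities, shape (C,)
    best = int(torch.argmax(sims))
    return class_names[best], sims[best].item()

# Toy usage with random embeddings standing in for encoder outputs.
D = 256
img_emb = F.normalize(torch.randn(D), dim=-1)
class_names = ["zebra", "giraffe", "okapi"]  # "okapi" unseen in training
class_text_embs = F.normalize(torch.randn(len(class_names), D), dim=-1)
label, score = zero_shot_classify(img_emb, class_text_embs, class_names)
print(f"predicted: {label} (similarity {score:.2f})")
```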

This Dagstuhl Seminar brought together an interdisciplinary group of researchers from computer vision, natural language processing, machine learning, and artificial intelligence to discuss the latest scientific advances and to develop a roadmap and research agenda.

Summary text license
  Creative Commons BY 3.0 Unported license
  Marie-Francine Moens, Lucia Specia, and Tinne Tuytelaars

Classification

  • Artificial Intelligence / Robotics
  • Computer Graphics / Computer Vision
  • Multimedia

Keywords

  • Image and video captioning
  • Human language grounding
  • Visual understanding
  • Language understanding
  • Language generation
  • Generation of visuals
  • World and common sense knowledge
  • Deep learning

Documentation

Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.

Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication resulting from your seminar is published. These publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.