Dagstuhl Seminar 20352

Security of Machine Learning (Cancelled)

(Aug 23 – Aug 28, 2020)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/20352

Replaced by
Dagstuhl Seminar 22281: Security of Machine Learning (2022-07-10 – 2022-07-15)

Organizers

Contact

Motivation

Modern technologies based on machine learning, including deep neural networks trained on massive amounts of labeled data, have achieved impressive performance in a variety of application domains. These range from classical pattern recognition tasks, for example, speech and object recognition for self-driving cars and robots, to more recent cybersecurity tasks such as spam and malware detection. Despite the unprecedented success of technologies based on machine learning, it has been shown that they suffer from vulnerabilities and data leaks. For example, several machine-learning algorithms can be easily fooled by adversarial examples, i.e., carefully perturbed input samples crafted to cause incorrect predictions. These insecurities pose a severe threat in a variety of applications: the object recognition systems used by robots and self-driving cars can be misled into seeing things that are not there, audio signals can be modified to confound automated speech-to-text transcriptions, and personal data may be extracted from the learning models of medical diagnosis systems.
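
To make the notion of an adversarial example concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft such carefully perturbed inputs. It assumes a differentiable PyTorch classifier `model`, a batch of inputs `x` in the range [0, 1] with labels `y`, and a perturbation budget `epsilon`; all of these names are illustrative, not part of the seminar material.

    # Sketch of FGSM: perturb the input in the direction that increases the loss.
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Return a copy of x perturbed by epsilon along the sign of the input gradient."""
        x = x.clone().detach().requires_grad_(True)   # track gradients w.r.t. the input
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()           # one signed gradient step
        return x_adv.clamp(0.0, 1.0).detach()         # assumes inputs live in [0, 1]

A perturbation of this kind is often imperceptible to humans yet sufficient to flip the classifier's prediction.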

In response to these threats, the research community has investigated various defensive methods that can be used to strengthen current machine learning approaches. Evasion attacks can be mitigated by the use of robust optimization and game-theoretical learning frameworks that explicitly account for the presence of adversarial data manipulations during the learning process. Rejection or explicit detection of adversarial attacks also provides an interesting research direction to mitigate this threat. Poisoning attacks can be countered by applying robust learning algorithms that natively account for the presence of poisoning samples in the training data, as well as by using ad hoc data-sanitization techniques. Nevertheless, most of the proposed defenses are based on heuristics and lack formal guarantees about their performance when deployed in the real world.
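
As a concrete illustration of the robust-optimization idea, the sketch below shows one epoch of adversarial training, where each training batch is replaced by its adversarially perturbed counterpart before the usual gradient step. It reuses the hypothetical `fgsm_example` from the previous sketch and assumes a PyTorch `model`, a data `loader`, and an `optimizer`; it is only one possible instantiation of the defenses mentioned above.

    # Sketch of adversarial training: approximate robust risk minimization by
    # attacking each batch (inner maximization) and then updating the model on
    # the perturbed batch (outer minimization).
    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
        model.train()
        for x, y in loader:
            x_adv = fgsm_example(model, x, y, epsilon)   # approximate inner maximization
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)      # outer minimization step
            loss.backward()
            optimizer.step()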

Another related issue is that it becomes increasingly hard to understand whether a complex system learns meaningful patterns from data or just spurious correlations. To facilitate trust in the predictions of learning systems, explainability of machine learning becomes a highly desirable property. Despite recent progress in the development of explanation techniques for machine learning, understanding how such explanations can be used to assess the security properties of learning algorithms remains an open, challenging problem.
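
One simple family of explanation techniques referred to above is gradient-based saliency, sketched below under the same assumption of a differentiable PyTorch classifier `model` and a batch of inputs `x`; the function name is hypothetical.

    # Sketch of a gradient-based saliency map: the magnitude of the input
    # gradient of the predicted-class score indicates which input features
    # most influence the prediction.
    import torch

    def saliency_map(model, x):
        x = x.clone().detach().requires_grad_(True)
        score = model(x).max(dim=1).values.sum()   # score of the predicted class per sample
        score.backward()
        return x.grad.abs()                        # per-feature attribution

Notably, the same input gradient drives both this explanation and the FGSM attack above, which hints at why the relationship between attacks and explanations is a seminar topic in its own right.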

This Dagstuhl Seminar aims to bring together researchers from a diverse set of backgrounds to discuss research directions that could lead to a scientific foundation for the security of machine learning. We focus the seminar on four key topics:

  • Attacks against machine learning: What attacks are most likely to be seen in practice? In what ways do existing attacks fall short of these practical requirements? In which other domains (i.e., beyond images) will attacks be seen?
  • Defenses for machine learning: Can machine learning be secure in all settings? What threat models are most likely to occur in practice? Can defenses be designed to be practically useful in these settings?
  • Foundations of secure learning: Can we formalize “adversarial robustness” (one candidate formalization is sketched after this list)? How should theoretical foundations for the security of machine learning be built? What kinds of theoretical guarantees can be expected, and how do they differ from the traditional theoretical instruments of machine learning?
  • Explainability of machine learning: What is the relationship between attacks and explanations? Can interpretations be trusted?
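
One candidate formalization of adversarial robustness, frequently used in the literature and offered here only as one possible answer to the question above, is robust risk minimization:

    \min_{\theta} \; \mathbb{E}_{(x,y) \sim D} \Big[ \max_{\|\delta\| \le \varepsilon} \; \ell\big(f_\theta(x + \delta),\, y\big) \Big]

The inner maximization models a worst-case perturbation of bounded norm, while the outer minimization trains the model parameters against it; the adversarial training sketch above approximates the inner problem with a single FGSM step.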

By bringing together researchers from the machine learning and security communities, the seminar is expected to generate new ideas for security assessment and design in the field of machine learning.

Copyright Battista Biggio, Nicholas Carlini, Pavel Laskov, and Konrad Rieck

Participants
  • Battista Biggio (University of Cagliari, IT) [dblp]
  • Nicholas Carlini (Google Brain - Mountain View, US)
  • Pavel Laskov (Universität Liechtenstein, LI) [dblp]
  • Konrad Rieck (TU Braunschweig, DE) [dblp]

Classification
  • artificial intelligence / robotics
  • computer graphics / computer vision
  • security / cryptology

Keywords
  • machine learning
  • adversarial examples
  • information security