Dagstuhl Seminar 22281

Security of Machine Learning

(July 10 – July 15, 2022)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/22281

Organizers
  • Battista Biggio (University of Cagliari, IT)
  • Nicholas Carlini (Google - Mountain View, US)
  • Pavel Laskov (Universität Liechtenstein, LI)
  • Konrad Rieck (TU Braunschweig, DE)

Contact



Program

Summary

Overview

Modern technologies based on machine learning, including deep neural networks trained on massive amounts of labeled data, have achieved impressive performance across a variety of application domains. These range from classical pattern recognition tasks, for example, speech and object recognition for self-driving cars and robots, to more recent cybersecurity tasks, such as attack and malware detection. Despite this unprecedented success, machine learning technologies have been shown to suffer from vulnerabilities and data leaks. For example, several machine learning algorithms can be easily fooled by adversarial examples, that is, carefully perturbed input samples crafted to thwart a correct prediction. These insecurities pose a severe threat in a variety of applications: the object recognition systems used by robots and self-driving cars can be misled into seeing things that are not there, audio signals can be modified to confound automated speech-to-text transcription, and personal data may be extracted from the learning models of medical diagnosis systems.
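
As a concrete illustration of how such adversarial examples can be crafted, the following minimal sketch implements the well-known fast gradient sign method (FGSM) against a generic differentiable classifier. It assumes a PyTorch model, a loss function, and inputs scaled to [0, 1]; these details are illustrative assumptions and are not taken from the seminar material, and FGSM is only one of many attack strategies studied in this area.

import torch

def fgsm_adversarial_example(model, loss_fn, x, y, epsilon=0.03):
    """Minimal FGSM sketch: take one step of size epsilon along the sign
    of the input gradient of the loss, pushing the model's prediction
    away from the true label y. Assumes inputs are scaled to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Perturb along the gradient sign, then clip back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()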

In response to these threats, the research community has investigated various defensive methods that can be used to strengthen current machine learning approaches. Evasion attacks can be mitigated with robust optimization and game-theoretic learning frameworks that explicitly account for adversarial data manipulations during the learning process. Rejection or explicit detection of adversarial inputs also provides an interesting research direction for mitigating this threat. Poisoning attacks can be countered by robust learning algorithms that natively account for the presence of poisoning samples in the training data, as well as by ad hoc data-sanitization techniques. Nevertheless, most of the proposed defenses are based on heuristics and lack formal guarantees about their performance when deployed in the real world.
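
As a rough sketch of how robust optimization can fold adversarial manipulations into the learning process, the loop below performs adversarial training: an inner step crafts a perturbed input (reusing the hypothetical FGSM sketch above), and an outer step updates the model on that perturbed input. The model, optimizer, and data loader are assumptions made for illustration, not components prescribed by the seminar.

def adversarial_training_epoch(model, loss_fn, optimizer, data_loader, epsilon=0.03):
    """One epoch of adversarial training (robust-optimization sketch):
    approximate the inner maximization with FGSM, then minimize the
    loss on the perturbed inputs."""
    model.train()
    for x, y in data_loader:
        # Inner step: craft an adversarial version of the batch.
        x_adv = fgsm_adversarial_example(model, loss_fn, x, y, epsilon)
        # Outer step: update the model on the perturbed batch.
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()

In practice the single FGSM step is a weak approximation of the inner maximization; stronger defenses typically use multi-step attacks (e.g., projected gradient descent) at correspondingly higher training cost.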

A related issue is that it becomes increasingly hard to tell whether a complex system learns meaningful patterns from data or merely spurious correlations. To foster trust in the predictions of learning systems, the explainability of machine learning therefore becomes a highly desirable property. Despite recent progress in the development of explanation techniques for machine learning, understanding how such explanations can be used to assess the security properties of learning algorithms remains an open and challenging problem.
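
A simple representative of such explanation techniques is gradient-based saliency. The sketch below attributes a prediction to input features via the gradient of the class score; as before, the differentiable PyTorch classifier is an assumption made purely for illustration and does not correspond to any specific method discussed at the seminar.

def saliency_map(model, x, target_class):
    """Gradient-based saliency sketch: the magnitude of the class score's
    gradient with respect to each input feature serves as a rough measure
    of that feature's influence on the prediction."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[:, target_class].sum()
    score.backward()
    return x.grad.abs()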

This Dagstuhl Seminar aims to bring together researchers from a diverse set of backgrounds to discuss research directions that could lead to the scientific foundation for the security of machine learning.

Goal of the Seminar

The seminar focused on four main themes of discussion, consistent with the research directions outlined above:

  • Attacks against machine learning: What attacks are most likely to be seen in practice? How do existing attacks fail to meet those requirements? In what other domains (i.e., not images) will attacks be seen?
  • Defenses for machine learning: Can machine learning be secure in all settings? What threat models are most likely to occur in practice? Can defenses be designed to be practically useful in these settings?
  • Foundations of secure learning: Can we formalize "adversarial robustness"? (One common formalization is sketched after this list.) How should theoretical foundations of security of machine learning be built? What kind of theoretical guarantees can be expected, and how do they differ from traditional theoretical instruments of machine learning?
  • Explainability of machine learning: What is the relationship between attacks and explanations? Can interpretation be trusted?
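
One common way to make "adversarial robustness" precise, given here only as an illustrative sketch rather than a definition adopted by the seminar, is pointwise robustness under a norm-bounded threat model, together with the corresponding robust-optimization training objective:

A classifier $f_\theta$ is called $\epsilon$-robust at a correctly classified input $x$ if
\[
  f_\theta(x + \delta) = f_\theta(x) \quad \text{for all } \|\delta\| \le \epsilon ,
\]
and robust (adversarial) training seeks parameters that minimize the worst-case loss within this perturbation budget:
\[
  \min_{\theta} \; \mathbb{E}_{(x, y) \sim D} \Big[ \max_{\|\delta\| \le \epsilon} \ell\big(f_\theta(x + \delta), y\big) \Big] .
\]
Here $D$ is the data distribution, $\ell$ the training loss, and $\|\cdot\|$ and $\epsilon$ fix the assumed threat model; other formalizations (e.g., probabilistic or certified notions of robustness) exist as well.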

Overall Organization and Schedule

The seminar was designed to combine the advantages of conventional conference formats with the peculiarities and specific traditions of Dagstuhl events, and its activities were scheduled accordingly.

Copyright Battista Biggio, Nicholas Carlini, Pavel Laskov, and Konrad Rieck

Motivation

Modern technologies based on machine learning, including deep neural networks trained on massive amounts of labeled data, have achieved impressive performance across a variety of application domains. These range from classical pattern recognition tasks, for example, speech and object recognition for self-driving cars and robots, to more recent cybersecurity tasks such as spam and malware detection. Despite this unprecedented success, machine learning technologies have been shown to suffer from vulnerabilities and data leaks. For example, several machine learning algorithms can be easily fooled by adversarial examples, i.e., carefully perturbed input samples crafted to thwart a correct prediction. These insecurities pose a severe threat in a variety of applications: the object recognition systems used by robots and self-driving cars can be misled into seeing things that are not there, audio signals can be modified to confound automated speech-to-text transcription, and personal data may be extracted from the learning models of medical diagnosis systems.

In response to these threats, the research community has investigated various defensive methods that can be used to strengthen current machine learning approaches. Evasion attacks can be mitigated with robust optimization and game-theoretic learning frameworks that explicitly account for adversarial data manipulations during the learning process. Rejection or explicit detection of adversarial inputs also provides an interesting research direction for mitigating this threat. Poisoning attacks can be countered by robust learning algorithms that natively account for the presence of poisoning samples in the training data, as well as by ad hoc data-sanitization techniques. Nevertheless, most of the proposed defenses are based on heuristics and lack formal guarantees about their performance when deployed in the real world.

A related issue is that it becomes increasingly hard to tell whether a complex system learns meaningful patterns from data or merely spurious correlations. To foster trust in the predictions of learning systems, explainability of machine learning therefore becomes a highly desirable property. Despite recent progress in the development of explanation techniques for machine learning, understanding how such explanations can be used to assess the security properties of learning algorithms remains an open and challenging problem.

This Dagstuhl Seminar aims to bring together researchers from a diverse set of backgrounds to discuss research directions that could lead to the scientific foundation for the security of machine learning. We focus the seminar on four key topics.

  • Attacks against machine learning: What attacks are most likely to be seen in practice? How do existing attacks fail to meet those requirements? In what other domains (i.e., not images) will attacks be seen?
  • Defenses for machine learning: Can machine learning be secure in all settings? What threat models are most likely to occur in practice? Can defenses be designed to be practically useful in these settings?
  • Foundations of secure learning: Can we formalize “adversarial robustness”? How should theoretical foundations of security of machine learning be built? What kind of theoretical guarantees can be expected and how do they differ from traditional theoretical instruments of machine learning?
  • Explainability of machine learning: What is the relationship between attacks and explanations? Can interpretation be trusted?

By bringing together researchers from the machine learning and security communities, the seminar is expected to generate new ideas for security assessment and design in the field of machine learning.

Copyright Battista Biggio, Nicholas Carlini, Pavel Laskov, and Konrad Rieck

Participants
  • Hyrum Anderson (Robust Intelligence - San Francisco, US) [dblp]
  • Giovanni Apruzzese (Universität Liechtenstein, LI)
  • Verena Battis (Fraunhofer SIT - Darmstadt, DE)
  • Battista Biggio (University of Cagliari, IT) [dblp]
  • Wieland Brendel (Universität Tübingen, DE)
  • Nicholas Carlini (Google - Mountain View, US)
  • Antonio Emanuele Cinà (University of Venice, IT)
  • Thorsten Eisenhofer (Ruhr-Universität Bochum, DE)
  • Asja Fischer (Ruhr-Universität Bochum, DE) [dblp]
  • Marc Fischer (ETH Zürich, CH)
  • David Freeman (Facebook - Menlo Park, US) [dblp]
  • Kathrin Grosse (University of Cagliari, IT) [dblp]
  • Pavel Laskov (Universität Liechtenstein, LI) [dblp]
  • Aikaterini Mitrokotsa (Universität St. Gallen, CH) [dblp]
  • Seyed Mohsen Moosavi-Dezfooli (Imperial College London, GB) [dblp]
  • Nicola Paoletti (Royal Holloway, University of London, GB) [dblp]
  • Giancarlo Pellegrino (CISPA - Saarbrücken, DE) [dblp]
  • Fabio Pierazzi (King's College London, GB)
  • Maura Pintor (University of Cagliari, IT)
  • Konrad Rieck (TU Braunschweig, DE) [dblp]
  • Kevin Alejandro Roundy (NortonLifeLock - Culver City, US) [dblp]
  • Lea Schönherr (CISPA - Saarbrücken, DE) [dblp]
  • Vitaly Shmatikov (Cornell Tech - New York, US) [dblp]
  • Nedim Srndic (Huawei Technologies - München, DE) [dblp]

Classification
  • Artificial Intelligence
  • Cryptography and Security
  • Machine Learning

Keywords
  • machine learning
  • adversarial examples
  • information security