Dagstuhl Perspectives Workshop 12371

Machine Learning Methods for Computer Security

(Sep 09 – Sep 14, 2012)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/12371

Organizers

Coordinator

Contact

Program

Press Room

Summary

Arising organically from a variety of independent research projects in both computer security and machine learning, the topic of machine learning methods for computer security is emerging as a major direction of research that offers new challenges to both communities. Learning approaches are particularly advantageous for security applications designed to counter sophisticated and evolving adversaries, because they can cope with large data tasks that are too complex for hand-crafted solutions or that need to evolve dynamically. In adversarial settings, however, the assets of learning can potentially be subverted by malicious manipulation of the learner's environment. This exposes applications that use learning techniques to a new type of security vulnerability in which an adversary can adapt to counter learning-based methods. Thus, unlike most application domains, computer security applications present a unique data domain whose adversarial nature must be considered carefully to provide adequate learning-based solutions, a challenge requiring novel learning methods as well as domain-specific application design and analysis. The Perspectives Workshop "Machine Learning Methods for Computer Security" brought together prominent researchers from the computer security and machine learning communities who are interested in furthering the state of the art of this fusion research, in order to discuss open problems, foster new research directions, and promote further collaboration between the two communities.
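To make the threat of a manipulated learning environment concrete, here is a minimal sketch of a causative ("poisoning") attack. It is purely illustrative and not drawn from the workshop material: the nearest-centroid anomaly detector, the 2-D feature space, and all numbers are hypothetical assumptions. By injecting training points clustered around a target input, the adversary drags the learned centroid toward it and stretches the acceptance radius until the attack input is treated as benign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D "benign behaviour" features used to train an anomaly detector.
benign = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))

def fit_centroid_detector(X, quantile=0.95):
    """Learn a centroid and an acceptance radius covering `quantile` of the data."""
    centroid = X.mean(axis=0)
    radius = np.quantile(np.linalg.norm(X - centroid, axis=1), quantile)
    return centroid, radius

def accepts(x, centroid, radius):
    return np.linalg.norm(x - centroid) <= radius

attack_point = np.array([4.0, 4.0])  # input the attacker wants accepted as benign

# Trained on clean data, the detector rejects the far-away attack point.
c_clean, r_clean = fit_centroid_detector(benign)
print("clean model accepts attack point:   ", accepts(attack_point, c_clean, r_clean))

# Poisoning: the adversary injects points clustered around the target, shifting
# the centroid toward it and inflating the learned radius.
poison = rng.normal(loc=attack_point, scale=0.25, size=(80, 2))
c_pois, r_pois = fit_centroid_detector(np.vstack([benign, poison]))
print("poisoned model accepts attack point:", accepts(attack_point, c_pois, r_pois))
```

Robust statistical learning (one of the keywords below) targets exactly this failure mode: bounding how far a limited fraction of adversarial training data can move the learned model.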

This workshop focused on tasks in three main topics: the role of learning in computer security applications, the paradigm of secure learning, and future applications for secure learning. In the first group, participants discussed how learning approaches are currently used by security practitioners. The second group focused on current approaches and challenges for learning in security-sensitive adversarial domains. Finally, the third group sought to identify future application domains that would benefit from secure learning technologies.

Several recurrent themes arose throughout the workshop within this emerging field. A major concern was an uneasiness with machine learning and a reluctance to use learning within security applications; to address this problem, participants identified the need for learning methods to provide better transparency, interpretability, and trust. Further, many attendees raised the question of how human operators can be incorporated into the learning process to guide it, interpret its results, and prevent unintended consequences, which reinforces the need for transparency and interpretability of these methods. On the learning side, researchers discussed how an adversary should be properly incorporated into a learning framework and how algorithms can be designed in a game-theoretic manner to provide security guarantees. Finally, participants identified the need for a proper characterization of security objectives for learning and for benchmarks to assess an algorithm's security.
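As one sketch of what such a security benchmark could look like, the toy evaluation below is an assumption rather than anything specified at the workshop: it trains a plain logistic-regression detector with gradient descent and then measures its detection rate against a test-time evasion attacker with a bounded L2 perturbation budget. For a linear model the attacker's strongest move within that budget is to shift each malicious point against the weight vector, so the resulting detection-rate-versus-budget curve is a crude but reproducible measure of the model's security under an explicitly stated threat model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D features: class 0 = benign, class 1 = malicious.
X0 = rng.normal([-2.0, -2.0], 1.0, size=(300, 2))
X1 = rng.normal([+2.0, +2.0], 1.0, size=(300, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(300), np.ones(300)])

# Train logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(malicious)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def detection_rate(X_mal, eps):
    """Fraction of malicious points still detected after each is perturbed by at
    most `eps` in L2 norm along -w, the optimal evasion direction for a linear model."""
    X_adv = X_mal - eps * w / np.linalg.norm(w)
    return float(np.mean(X_adv @ w + b > 0.0))

# Toy "security curve": detection rate as the attacker's budget grows.
for eps in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"budget {eps:.1f}  ->  detection rate {detection_rate(X1, eps):.2f}")
```

Tabulating such curves for competing algorithms under the same declared threat model is one simple way to compare their security side by side.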


Participants
  • Battista Biggio (University of Cagliari, IT) [dblp]
  • Christian Bockermann (TU Dortmund, DE)
  • Michael Brückner (SoundCloud Ltd., DE)
  • Alvaro Cárdenas Mora (University of Texas at Dallas, US) [dblp]
  • Christos Dimitrakakis (EPFL - Lausanne, CH) [dblp]
  • Felix Freiling (Universität Erlangen-Nürnberg, DE) [dblp]
  • Giorgio Fumera (University of Cagliari, IT) [dblp]
  • Giorgio Giacinto (University of Cagliari, IT)
  • Rachel Greenstadt (Drexel University - Philadelphia, US) [dblp]
  • Anthony D. Joseph (University of California - Berkeley, US) [dblp]
  • Robert Krawczyk (BSI - Bonn, DE)
  • Pavel Laskov (Universität Tübingen, DE) [dblp]
  • Richard P. Lippmann (MIT Lincoln Laboratory - Lexington, US)
  • Daniel Lowd (University of Oregon - Eugene, US) [dblp]
  • Aikaterini Mitrokotsa (EPFL - Lausanne, CH) [dblp]
  • Sasa Mrdovic (University of Sarajevo, BA)
  • Blaine Nelson (Universität Tübingen, DE)
  • Patrick Pak Kei Chan (South China University of Technology, CN)
  • Massimiliano Raciti (Linköping University, SE)
  • Nathan Ratliff (Google - Pittsburgh, US)
  • Konrad Rieck (Universität Göttingen, DE) [dblp]
  • Fabio Roli (University of Cagliari, IT) [dblp]
  • Benjamin I. P. Rubinstein (Microsoft Corp. - Mountain View, US) [dblp]
  • Tobias Scheffer (Universität Potsdam, DE) [dblp]
  • Galina Schwartz (University of California - Berkeley, US) [dblp]
  • Nedim Srndic (Universität Tübingen, DE) [dblp]
  • Radu State (University of Luxembourg, LU) [dblp]
  • Doug Tygar (University of California - Berkeley, US) [dblp]
  • Viviane Zwanger (Universität Erlangen-Nürnberg, DE)

Classification
  • Artificial Intelligence / Robotics
  • Security / Cryptography

Keywords
  • Adversarial Learning
  • Computer Security
  • Robust Statistical Learning
  • Online Learning with Experts
  • Game Theory
  • Learning Theory