https://www.dagstuhl.de/19331

August 11 – 16, 2019, Dagstuhl Seminar 19331

Software Protection Decision Support and Evaluation Methodologies

Organizers

Christian Collberg (University of Arizona – Tucson, US)
Mila Dalla Preda (University of Verona, IT)
Bjorn De Sutter (Ghent University, BE)
Brecht Wyseur (NAGRA Kudelski Group SA – Cheseaux, CH)

Information about this Dagstuhl Seminar is provided by

Susanne Bach-Bernhard for administrative matters

Andreas Dolzmann for scientific matters

Documents

Schedule of the Dagstuhl Seminar (upload)

(To log in, please use the seminar number and access code)

Motivation

This Dagstuhl Seminar addresses open challenges in developing a holistic, generally applicable methodology and tool support to model and evaluate the strength of software protections as defenses against man-at-the-end attacks such as reverse-engineering and software tampering. Such a methodology and supporting tools are necessary to (partially) automate the selection and deployment of techniques that protect the confidentiality and integrity of various types of assets embedded in software.

The seminar unites academic and industrial experts from multiple backgrounds, such as hacking and penetration testing, experimental software engineering, software complexity theory, game theory, machine learning, formal modeling, software analysis (tools), software protection, (white-box) cryptography, and information security economics. Bringing these experts together is necessary to ensure that the proposed models and evaluation techniques can be experimentally validated, that they can be automated in a user-friendly, trustworthy, and scalable manner, and that they cover a wide range of real-world attack methods, available protections, assets, and security requirements.

Metrics: Some qualitative and quantitative, measurable features of software have been put forward to evaluate relevant aspects of software protections, such as the potency, resilience, and stealth of obfuscations. Most proposed metrics are ad hoc, however: they have not been validated against real-world attacks and are not generally applicable. It is necessary to develop a wider set of metrics that can be validated, that can be evaluated by supporting tools, and that cover the widest possible range of protection techniques and attacks. Such metrics and supporting tools can not only facilitate decision support, but also provide a standardized toolbox with which researchers can demonstrate the value of the novel protections they develop.
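As a purely illustrative sketch, one candidate from this space is potency expressed as the relative increase of a complexity measure, here McCabe's cyclomatic complexity computed from control-flow-graph sizes. The function names and the example CFG sizes below are assumptions for illustration, not a validated or tool-supported metric:

# Hypothetical sketch: a potency-style obfuscation metric based on
# cyclomatic complexity. The CFG sizes and the metric definition are
# illustrative assumptions, not a validated measure.

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's cyclomatic complexity M = E - N + 2P."""
    return edges - nodes + 2 * components

def potency(original_cfg, obfuscated_cfg) -> float:
    """Relative complexity increase of the obfuscated program over the original.

    Each CFG is given as a pair (edges, nodes). Values > 0 indicate that the
    obfuscation made the program 'more complex' under this single metric.
    """
    m_orig = cyclomatic_complexity(*original_cfg)
    m_obf = cyclomatic_complexity(*obfuscated_cfg)
    return m_obf / m_orig - 1.0

# Example: original CFG with 12 edges / 10 nodes, obfuscated with 45 edges / 30 nodes
print(potency((12, 10), (45, 30)))   # ~3.25, i.e. a 325% complexity increase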

Attack and protection modeling: Knowledge bases, attack graphs, Petri nets, ontologies, and relational models have been proposed to reason about attacks and about the impact of protections on them. The feasibility of automatically generating relevant, complete model instances for real-world use cases has not been demonstrated, however, nor is it clear at which abstraction level the models best support automated comparison and selection of protections. The seminar aims to refine, extend, and augment modeling and reasoning approaches, borrowing from other security domains and adversarial settings where possible, to enable quantitative and qualitative assessment of the impact of protections on the relevant attack paths against concrete software assets.
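For illustration only, the following sketch reduces an attack model to a weighted digraph in which each edge is an attack step with an estimated effort, and a protection is modeled as raising the effort of the steps it hampers; reasoning then amounts to searching for the cheapest attack path. All node names and effort values are invented assumptions:

# Hypothetical sketch: attack steps as a weighted digraph; a protection
# raises the estimated effort of the steps it hampers. All node names
# and effort values are invented for illustration.
import heapq

def cheapest_attack_path(graph, start, goal):
    """Dijkstra over attack steps; returns (total effort, path) or (inf, [])."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, effort in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + effort, nxt, path + [nxt]))
    return float("inf"), []

# Attack graph: obtain binary -> locate asset -> extract key, efforts in analyst hours
attack_graph = {
    "obtain_binary": {"locate_asset_static": 4, "locate_asset_dynamic": 8},
    "locate_asset_static": {"extract_key": 6},
    "locate_asset_dynamic": {"extract_key": 3},
}

# Applying a (hypothetical) control-flow obfuscation raises the static-analysis step
protected = {k: dict(v) for k, v in attack_graph.items()}
protected["obtain_binary"]["locate_asset_static"] = 40

print(cheapest_attack_path(attack_graph, "obtain_binary", "extract_key"))
print(cheapest_attack_path(protected, "obtain_binary", "extract_key"))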

Decision Support: Only limited approaches have been presented for automating the selection of protections given concrete assets and their security requirements, the available protections, and software development lifecycle requirements. Those approaches are anything but ready for real-world deployment; for example, they currently cannot handle synergies between, or composability issues of, protection combinations well. In this seminar, we will combine existing knowledge in this area with expertise in game theory, machine learning, and security economics to develop new protection design space exploration approaches that build on the aforementioned metrics, models, and reasoning approaches to assist defenders of assets.
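To make the design-space-exploration idea slightly more concrete, the following hedged sketch exhaustively searches protection combinations for the highest estimated protection score within a runtime-overhead budget, with a crude penalty for a combination assumed to compose poorly. All protections, scores, overheads, and penalties are invented placeholders, not outputs of a real decision-support system:

# Hypothetical sketch: brute-force exploration of protection combinations.
# Scores, overheads, and the synergy penalty are invented placeholders for
# whatever metrics and models a real decision-support system would provide.
from itertools import combinations

PROTECTIONS = {
    # name: (estimated protection score, runtime overhead in %)
    "control_flow_flattening": (5.0, 20.0),
    "opaque_predicates":       (3.0, 5.0),
    "anti_debugging":          (4.0, 2.0),
    "code_mobility":           (6.0, 15.0),
}

# Pairs whose combined benefit is assumed to be less than the sum of the parts
SYNERGY_PENALTY = {frozenset({"control_flow_flattening", "opaque_predicates"}): 2.0}

def score(combo):
    total = sum(PROTECTIONS[p][0] for p in combo)
    for pair, penalty in SYNERGY_PENALTY.items():
        if pair <= set(combo):
            total -= penalty
    return total

def best_combination(budget_pct):
    best = ((), 0.0)
    for r in range(1, len(PROTECTIONS) + 1):
        for combo in combinations(PROTECTIONS, r):
            overhead = sum(PROTECTIONS[p][1] for p in combo)
            if overhead <= budget_pct and score(combo) > best[1]:
                best = (combo, score(combo))
    return best

print(best_combination(budget_pct=25.0))

Exhaustive search is only workable for toy-sized catalogues such as this one; exploring realistic protection spaces is precisely where the heuristics, game-theoretic, and learning-based approaches mentioned above would have to come in.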

Validation: To provide trustworthy decision support, the underlying metrics and models need to be validated. Validation experiments, whether controlled or not, are expensive, however, and extremely hard to design and execute correctly. Moreover, it is hard to extrapolate from pointwise validation samples. In this seminar, we will combine experience in penetration testing, in experimental software engineering, and in the industrial use of protection techniques to draft best practices for validation experiments and a roadmap of useful validation strategies that set us on a path towards trustworthy decision support.
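As a toy illustration of what a controlled validation experiment might quantify, the sketch below compares time-to-compromise samples from a protected and an unprotected treatment using a bootstrap confidence interval for the difference of medians. The sample values are invented placeholders, not measurements:

# Hypothetical sketch: comparing time-to-compromise (hours) from a controlled
# experiment with and without a protection. The sample values are invented
# placeholders; a real experiment would supply measured data.
import random
import statistics

def bootstrap_median_diff(protected, baseline, n_resamples=10_000, seed=0):
    """95% bootstrap confidence interval for the difference of median attack times."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_resamples):
        p = [rng.choice(protected) for _ in protected]
        b = [rng.choice(baseline) for _ in baseline]
        diffs.append(statistics.median(p) - statistics.median(b))
    diffs.sort()
    return diffs[int(0.025 * n_resamples)], diffs[int(0.975 * n_resamples)]

# Placeholder samples: hours each penetration tester needed to extract the asset
baseline  = [3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 3.5, 4.0]
protected = [9.0, 14.5, 7.5, 11.0, 20.0, 8.5, 12.0, 10.5]

low, high = bootstrap_median_diff(protected, baseline)
print(f"95% CI for median extra effort: [{low:.1f}, {high:.1f}] hours")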

License
  Creative Commons BY 3.0 DE
  Christian Collberg, Mila Dalla Preda, Bjorn De Sutter, and Brecht Wyseur

Classification

  • Security / Cryptology

Keywords

  • Man-at-the-end attacks
  • Software protection
  • Predictive models
  • Metrics
  • Reverse engineering and tampering

Book exhibition

Books by the participants

Book exhibition on the ground floor of the library (only during the week of the event).

Documentation

All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the collector of the seminar, the organizers compile a report that summarizes the contributions of the authors and complements them with an overall summary.


Download the overview flyer (PDF).

Publications

In addition, there is the possibility to publish a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.

Dagstuhl's Impact

Please inform us when a publication results from your seminar. Such publications are listed separately in the Dagstuhl's Impact section and presented on the ground floor of the library.