August 11–16, 2019, Dagstuhl Seminar 19331

Software Protection Decision Support and Evaluation Methodologies


Christian Collberg (University of Arizona – Tucson, US)
Mila Dalla Preda (University of Verona, IT)
Bjorn De Sutter (Ghent University, BE)
Brecht Wyseur (NAGRA Kudelski Group SA – Cheseaux, CH)





This Dagstuhl Seminar addresses open challenges in developing a holistic, generally applicable methodology and tool support to model and evaluate the strength of software protections as defenses against man-at-the-end attacks such as reverse-engineering and software tampering. Such a methodology and supporting tools are necessary to (partially) automate the selection and deployment of techniques that protect the confidentiality and integrity of various types of assets embedded in software.

The seminar unites academic and industrial experts from multiple backgrounds such as hacking and penetration testing, experimental software engineering, software complexity theory, game theory, machine learning, formal modeling, software analysis (tools), software protection, (white-box) cryptography, and information security economics. Bringing these experts together is necessary to ensure that the proposed models and evaluation techniques can be experimentally validated, that they can be automated in a user-friendly, trustworthy, and scalable manner, and that they cover a wide range of real-world attack methods, available protections, assets, and security requirements.

Metrics

Some qualitative and quantitative, measurable features of software have been put forward to evaluate relevant aspects of software protections, such as the potency, resilience, and stealth of obfuscations. Most proposed metrics are ad hoc, however: they are not validated against real-world attacks and are not generally applicable. It is necessary to develop a wider set of metrics that can be validated, that can be evaluated by supporting tools, and that cover the widest possible range of protection techniques and attacks. Such metrics and supporting tools can facilitate decision support, and also provide a standardized toolbox for researchers to demonstrate the value of the novel protections they develop.
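One way to make such a metric concrete is sketched below in Python: a crude branch count (a stand-in for a real complexity metric such as cyclomatic complexity) is computed before and after an obfuscating rewrite, and the relative increase serves as a potency score. The sample snippets and the control-flow-flattening-style rewrite are hand-written illustrations, not the output of any real protection tool.

```python
import ast

def branch_count(source: str) -> int:
    """Crude complexity metric: count branching constructs in the AST."""
    branches = (ast.If, ast.IfExp, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branches) for node in ast.walk(ast.parse(source)))

# Hand-written sample: a PIN check before protection.
original = """
def check(pin):
    if pin == 1234:
        return True
    return False
"""

# The same check after a control-flow-flattening-style rewrite (also hand-written).
obfuscated = """
def check(pin):
    state, ok = 0, False
    while state != 3:
        if state == 0:
            state = 1 if pin >= 1000 else 3
        elif state == 1:
            state = 2 if pin <= 9999 else 3
        else:
            ok = pin == 1234
            state = 3
    return ok
"""

def potency(orig: str, obf: str) -> float:
    """Potency as the relative increase of the chosen metric."""
    return branch_count(obf) / branch_count(orig) - 1.0

print(potency(original, obfuscated))  # → 4.0
```

Any other measurable feature (instruction counts, call-graph size, entropy) could be substituted for the branch count; the validation problem discussed here is precisely whether such numbers correlate with real attacker effort.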

Attack and protection modeling

Knowledge bases, attack graphs, Petri nets, ontologies, and relational models have been proposed to reason about attacks and about protections' impact on them. The feasibility of automatically generating relevant, complete model instances for real-world use cases has not been demonstrated, however. Nor is it clear which abstraction level best lets the models support automated comparison and selection of protections. The seminar aims to refine, extend, and augment modeling and reasoning approaches, borrowing from other security domains and adversarial settings where possible, to enable quantitative and qualitative assessment of the impact of protections on the relevant attack paths against concrete software assets.
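As a toy illustration of what such models can express, the sketch below (plain Python; all node and step names are invented) encodes an attack graph as a dictionary mapping attacker states to outgoing attack steps, and counts how many attack paths toward an asset survive once a deployed protection blocks a step:

```python
def attack_paths(graph, src, goal, seen=frozenset()):
    """Enumerate simple attack paths (as lists of step names) from src to goal."""
    if src == goal:
        return [[]]
    result = []
    for step, nxt in graph.get(src, []):
        if nxt not in seen:
            for rest in attack_paths(graph, nxt, goal, seen | {src}):
                result.append([step] + rest)
    return result

# Toy model of an attack on a key embedded in a binary.
graph = {
    "start": [("static disassembly", "code recovered"),
              ("debugger attach", "live trace")],
    "code recovered": [("locate key routine", "key found")],
    "live trace": [("dump memory", "key found")],
}

def after(graph, blocked):
    """Remove the attack steps a deployed protection blocks."""
    return {n: [(s, m) for s, m in es if s not in blocked]
            for n, es in graph.items()}

print(len(attack_paths(graph, "start", "key found")))          # → 2
# Anti-debugging blocks the debugger path; one path remains.
print(len(attack_paths(after(graph, {"debugger attach"}),
                       "start", "key found")))                 # → 1
```

A realistic model would attach effort or probability estimates to each step rather than simply deleting blocked ones, which is where the quantitative assessment mentioned above comes in.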

Decision Support

Only limited approaches have been presented for automating the selection of protections given concrete assets and their security requirements, the available protections, and software development lifecycle requirements. Those approaches are anything but ready for real-world deployment; for example, they currently cannot handle synergies or composability issues of protection combinations well. In this seminar, we will combine existing knowledge in this area with expertise in game theory, machine learning, and security economics to develop new protection design space exploration approaches that build on the aforementioned metrics, models, and reasoning approaches to assist defenders of assets.
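For small protection catalogs, such design space exploration can even be exhaustive. The sketch below (Python; all strength scores, costs, and the single composability conflict are invented for illustration) searches all protection combinations under a cost budget and penalizes combinations with a known conflict:

```python
from itertools import combinations

protections = {              # name: (strength score, deployment cost)
    "obfuscation":    (3, 2),
    "anti-debugging": (2, 1),
    "anti-tampering": (4, 3),
    "virtualization": (5, 4),
}
# A pair assumed to compose badly (a composability issue, not a synergy).
conflicts = {frozenset({"virtualization", "anti-tampering"})}

def score(combo):
    """Summed strength, penalized for known pairwise conflicts."""
    s = sum(protections[p][0] for p in combo)
    for a, b in combinations(combo, 2):
        if frozenset({a, b}) in conflicts:
            s -= 3
    return s

def best(budget):
    """Exhaustively pick the highest-scoring combination within budget."""
    names = list(protections)
    feasible = (c for r in range(len(names) + 1)
                for c in combinations(names, r)
                if sum(protections[p][1] for p in c) <= budget)
    return max(feasible, key=score)

print(sorted(best(6)))  # → ['anti-debugging', 'anti-tampering', 'obfuscation']
```

Real decision support cannot enumerate like this: catalogs are larger, scores must come from validated metrics and models, and synergies and conflicts depend on deployment order, which is why heuristic search, game theory, and machine learning are on the table.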

Validation

To provide trustworthy decision support, the underlying metrics and models need to be validated. Validation experiments (controlled or not) are expensive, however, and extremely hard to design and execute correctly. Moreover, it is hard to extrapolate from pointwise validation samples. In this seminar, we will combine experience in penetration testing, in experimental software engineering, and in the industrial use of protection techniques to draft best practices for validation experiments and a roadmap for useful validation strategies, setting us on a path towards trustworthy decision support.

Motivation text license: Creative Commons BY 3.0 DE
Christian Collberg, Mila Dalla Preda, Bjorn De Sutter, and Brecht Wyseur


Classification

  • Security / Cryptology


Keywords

  • Man-at-the-end attacks
  • Software protection
  • Predictive models
  • Metrics
  • Reverse engineering and tampering


Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.




Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication has been published as a result of your seminar. Such publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.