https://www.dagstuhl.de/19231

June 2–7, 2019, Dagstuhl Seminar 19231

Empirical Evaluation of Secure Development Processes

Organizers

Adam Shostack (Seattle, US)
Matthew Smith (Universität Bonn and Fraunhofer FKIE, DE)
Sam Weber (Carnegie Mellon University – Pittsburgh, US)
Mary Ellen Zurko (MIT Lincoln Laboratory – Lexington, US)

Documents

Dagstuhl Report, Volume 9, Issue 6
Aims & Scope
List of Participants

Summary

The problem of how to design and build secure systems has been long-standing. For example, as early as 1978 Bisbey and Hollingworth [6] complained that there was no method of determining what an appropriate level of security for a system actually was. In the early years, various design principles, architectures, and methodologies were proposed: in 1972 Anderson [5] described the “reference monitor” concept, in 1974 Saltzer [7] described the “principle of least privilege”, and in 1985 the US Department of Defense issued the Trusted Computer System Evaluation Criteria [8].

Since then, although much progress has been made in software engineering, cybersecurity, and industrial practice, the fundamental scientific foundations have largely not been addressed: there is little empirical data quantifying the effects that these principles, architectures, and methodologies have on the resulting systems.

This lack of data leaves developers and industry in an undesirable position: it is difficult for organizations to choose practices that will cost-effectively reduce security vulnerabilities in a given system and help development teams achieve their security objectives. There has been much work on creating security development lifecycles, such as the Building Security In Maturity Model [1], the Microsoft Security Development Lifecycle [3], OWASP [4], and ISECOM [2], all of which incorporate a long series of recommended practices for requirements analysis, architectural threat analysis, and hostile code review. It is generally agreed that these efforts are beneficial. However, without answers as to why they are beneficial, and by how much, it is extremely difficult for organizations to rationally improve these processes or to evaluate the cost-effectiveness of any specific technique.

The ultimate goal of this seminar was to create a community for empirical science in software engineering for secure systems. This is particularly important at this nascent stage of research in the domain, since there is no venue at which researchers regularly meet and exchange ideas. Currently, individual pieces of work are published at a wide variety of venues, such as IEEE S&P, IEEE EuroS&P, ACM CCS, USENIX Security, SOUPS, SIGCHI, ICSE, USEC, EuroUSEC, and many more. The idea was that bringing together researchers who currently work separately and creating an active exchange would greatly benefit the community.

Naturally, community building is a long-term endeavor: we can initiate it at a Dagstuhl seminar, but it will require continued effort. Our more immediate goals were to develop a manifesto for the community elucidating the need for research in this area, and to provide actionable, concrete guidance on how to overcome the obstacles that have hindered progress.

One aspect of this was gathering information on how to conduct academic research in a way that can be transitioned to and consumed by developers. We felt that all too frequently developer needs are not fully understood by academics, and that developers underestimate the relevance of academic results. This information gathering will help foster mutual understanding between the two groups, and we specifically looked for ways to build bridges between them.

A second obstacle we aimed to address was how to produce sufficiently convincing empirical research, both at a foundational level and in specific application areas. Currently there is no consensus on what constitutes an ecologically valid study, and there are sporadic debates on the merits of the different approaches. This seminar included a direct and focused exchange of experience and facilitated the creation of much-needed guidelines for researchers. In keeping with our bridge building, we also looked at what developers find convincing and how that aligns with research requirements.

Seminar Format

Our seminar brought together thirty-three participants from industry, government, and both the security and software engineering academic communities. Before the seminar started, we provided participants with the opportunity to share background readings amongst themselves.

We began the seminar with level-setting and foundational talks from industrial, software engineering, and security participants, aimed at fostering a common understanding of the differing perspectives of the various communities.

Following this, the seminar was very dynamic: during each session we split into break-out groups whose topics were generated on the spot by the participants. The general mandate for each group was to tackle an aspect of the overall problem in an actionable and concrete way: we wished to avoid vague discussions of the difficulties involved in studying secure development and instead focus on how to improve our understanding and knowledge. After each session we reconvened as a whole and summarized each group's progress.

At the conclusion of the seminar we brought all the participants together for a general discussion of follow-up activities. In all, eighteen such activities, ranging from papers to research guideline documents, were proposed and organized by the participants.

References

  1. Building Security In Maturity Model. http://www.bsimm.com/.
  2. ISECOM. http://www.isecom.org.
  3. Microsoft Security Development Lifecycle. https://www.microsoft.com/en-us/securityengineering/sdl/.
  4. OWASP. https://www.owasp.org.
  5. Anderson, J. P. Computer Security Technology Planning Study, Volume II. Tech. Rep. ESD-TR-73-51, 1972.
  6. Bisbey, R., and Hollingworth, D. Protection analysis: Final report. Information Sciences Institute, University of Southern California: Marina Del Rey, CA, USA, Technical Report ISI/SR-78-13 (1978).
  7. Saltzer, J. Protection and the control of information sharing in Multics. Communications of the ACM 17, 7 (1974), 388–402.
  8. United States Department of Defense. Trusted Computer System Evaluation Criteria (Orange Book), 1985.
Summary text license
  Creative Commons BY 3.0 Unported license
  Adam Shostack, Matthew Smith, Sam Weber, and Mary Ellen Zurko

Classification

  • Security / Cryptology
  • Society / Human-computer Interaction
  • Software Engineering

Keywords

  • Empirical software engineering
  • Usable security for developers
