29.11.15 - 04.12.15, Seminar 15491

Approximate and Probabilistic Computing: Design, Coding, Verification

This seminar description was published on our web pages before the seminar and was used in the invitation to the seminar.


Computing has entered the era of approximation, in which hardware and software operate on and reason about estimates. For example, a navigation application turns maps and location estimates from hardware GPS sensors into driving directions; a speech recognition application turns an analog signal into a likely sentence; search turns queries into information; network protocols deliver unreliable messages; and recent research shows promise that approximate hardware and software can profitably trade result quality for energy efficiency. Millions of people already use software that computes with and reasons about approximate/probabilistic data daily. These complex systems require sophisticated algorithms to deliver accurate answers quickly, at scale, and with energy efficiency, and approximation is often the only way to meet these competing goals.

Despite their ubiquity, economic significance, and societal impact, building such applications is difficult and requires not only expertise across the system stack, but also expertise in statistics and application-specific domain knowledge. Non-expert developers need tools and expertise to help them design, code, and verify these complex systems.

Several research communities are independently investigating methodologies and techniques to model, analyze, and manage uncertainty in and throughout software systems.

Probabilistic model checking is widely used to support the design and analysis of computer systems, and has been rapidly gaining in importance in recent years. Traditionally, models such as Markov chains have been used to analyze system performance, where typically queuing theory is applied to obtain quantitative characteristics. Probabilistic modeling is also needed to quantify unreliable or unpredictable behavior, for example in fault-tolerant systems and communication protocols, where properties such as component failure and packet loss can be described probabilistically.
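As a concrete illustration, the core computation behind many such analyses is probabilistic reachability on a discrete-time Markov chain. The following sketch, with purely illustrative states and probabilities, approximates the probability of eventually reaching a failure state by fixed-point iteration:

```python
# Illustrative DTMC: state -> list of (successor, probability).
# States and numbers are invented for this sketch, e.g. modeling a
# protocol that retries after a fault and may lose a packet for good.
dtmc = {
    "start": [("ok", 0.9), ("fail", 0.1)],
    "ok":    [("ok", 1.0)],
    "fail":  [("start", 0.5), ("lost", 0.5)],
    "lost":  [("lost", 1.0)],
}

def reach_probability(chain, target, iterations=1000):
    """Approximate Pr[eventually reach `target`] from each state
    by iterating the fixed-point equations of the chain."""
    p = {s: (1.0 if s == target else 0.0) for s in chain}
    for _ in range(iterations):
        for s, succs in chain.items():
            if s != target:
                p[s] = sum(prob * p[t] for t, prob in succs)
    return p

probs = reach_probability(dtmc, "lost")
```

Tools such as PRISM perform this kind of computation (among much more) symbolically and at scale; the sketch only conveys the flavor of the underlying numerical problem.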

However, the application of probabilistic model checking to software engineering is at present mostly limited to early phases of development, where design models are translated more or less automatically into corresponding stochastic models. Design models are, moreover, hard to keep consistent with the implementation, since code artifacts in general comply only partially with their intended design. Quantitative software analysis aims to bring the verification of probabilistic concerns to the code level, devising procedures that operate directly on code artifacts. This extends the scope of probabilistic analysis to later stages of the development process, potentially also supporting the increasingly widespread agile methods whose core artifacts are often code.

While quantitative program analysis is focused on general programs dealing with probabilistic phenomena (e.g., unpredictable interaction with users), probabilistic programming makes uncertainty a first-class concept and thus enables probabilistic inference. Probabilistic programming languages augment existing programming languages with probabilistic primitives. The major goal of these languages is the efficient implementation of probabilistic inference, which combines a model (written in the probabilistic programming language) with observed evidence to infer a distribution over variables in the program in light of that evidence. These languages abstract away the details of inference, and so see frequent use by machine learning experts building their models, as well as by professionals who must model physical, social, and economic phenomena with an explicit account of uncertainty in their computational models.
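The model-plus-evidence pattern can be sketched in plain Python with the simplest possible inference strategy, rejection sampling. The model (a coin that is either fair or loaded) and all numbers are illustrative assumptions; real probabilistic languages automate far more efficient inference:

```python
import random

def model():
    # Prior: the coin's bias is fair (0.5) or loaded (0.9), equally likely.
    bias = random.choice([0.5, 0.9])
    # Generative part: five flips with that bias.
    flips = [random.random() < bias for _ in range(5)]
    return bias, flips

def infer_loaded(observed, samples=100_000):
    """Estimate Pr[bias = 0.9 | flips == observed] by rejection sampling:
    run the model many times and keep only runs matching the evidence."""
    kept = [bias for bias, flips in (model() for _ in range(samples))
            if flips == observed]
    return sum(b == 0.9 for b in kept) / len(kept)

random.seed(0)
posterior_loaded = infer_loaded([True] * 5)  # evidence: five heads
```

Analytically the posterior is 0.9^5 / (0.9^5 + 0.5^5) ≈ 0.95; the sampler recovers this approximately, which is exactly the combination of model and evidence that probabilistic programming languages implement efficiently.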

To tackle the challenge of uncertainty through the whole stack of complex computing systems, approximate computing is an emerging research area that focuses on devising systematic approaches for automating the development of approximate software that runs on today's commodity and approximate hardware, or on tomorrow's more exotic approximate hardware. The goal of approximate computing is to give developers an understanding of how approximate hardware and software affect the accuracy of an application's results, and to automate the management of accuracy, energy consumption, and performance. To achieve this goal, approximate computing is by nature a multidisciplinary field bringing together researchers from software systems - programming languages and software engineering - and hardware systems - circuit design and hardware architecture.
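A minimal software-side example of this accuracy-for-performance trade is loop perforation: execute only a fraction of a loop's iterations and accept the resulting error. The function and data below are illustrative, not from any particular system:

```python
def exact_mean(xs):
    # Full computation: visits every element.
    return sum(xs) / len(xs)

def perforated_mean(xs, stride=4):
    # Perforated computation: visits only every `stride`-th element,
    # doing roughly 1/stride of the work at some cost in accuracy.
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = [float(i % 100) for i in range(10_000)]
error = abs(perforated_mean(data) - exact_mean(data))
```

An approximate-computing system would make such a transformation automatically and, crucially, bound or monitor the resulting error rather than leave it implicit as this sketch does.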

The aim of this seminar is to bring together academic and industrial researchers from the areas of probabilistic model checking, quantitative software analysis, probabilistic programming, and approximate computing to share recent progress, identify the main challenges, and foster collaboration on common interests and problems.