05.03.17 - 10.03.17, Seminar 17101

Databases on Future Hardware

This seminar description was published on our web pages before the seminar and was used in the invitation to the seminar.


In the late 1990s, researchers realized that database management systems were particularly affected by the ongoing hardware evolution. More than any other class of software, databases suffered from the widening gap between memory and CPU performance, just when main memories began to grow large enough to hold meaningful data sets entirely in RAM. Yet it took another 15 years for that insight to find its way into actual systems, such as SAP HANA, Microsoft's Apollo/Hekaton, IBM DB2 BLU, or Oracle Database In-Memory.

Right now, hardware technology is again at a crossroads that will disrupt the way we build and use database systems. Heterogeneous system architectures will replace the prevalent multi-core designs and leverage the dark silicon principle to combat power limitations. Non-volatile memories bring persistence at the speed of current DRAM chips. High-speed interconnects allow for parallelism at unprecedented scales, but also force software to deal with distributed-systems characteristics (e.g., locality, unreliability).

It is not yet clear what exactly the new systems will look like; hardware makers are still figuring out which configurations will yield the best price/performance trade-offs. Nor is it clear how software should be redesigned to take advantage of the new hardware.

Over the past years, the hardware and software disciplines have evolved as mostly independent communities. The key goal of this Dagstuhl seminar is to bring them back together. We plan intensive and open-minded discussions with representatives from the relevant areas. During the five-day workshop, hardware architects, system designers, experts in query processing and transaction management, as well as experts in operating systems and networking will discuss the challenges and opportunities involved, so that hardware and software can evolve together rather than only individually and independently.

The seminar will embrace Dagstuhl's open discussion format. Possible discussion topics include:

  • Shared, cache-coherent memory vs. “distributed system in a box”?
  • What are the sweet spots to balance the characteristics of novel hardware components (e.g., latency, capacity/cost, reliability for non-volatile memories; bandwidth and latency for interconnect networks)?
  • Co-processors ("accelerators"): Which role will they play in tomorrow's data-intensive systems? What is the best way to integrate them into the rest of the hardware and software architecture?
  • The characteristics of modern storage technologies are often far removed from the classical assumptions of database systems: non-volatile memories offer persistent, yet byte-addressable memory; network storage technologies might allow for new system architectures; etc. What are the consequences for indexing, data access, or recovery mechanisms?
  • Networks have evolved from a very slow communication medium to powerful, very high-speed, and often even intelligent interconnects (e.g., InfiniBand, RDMA). How can we embrace these technologies in the end-to-end system?

Creative Commons BY 3.0 Unported license
Gustavo Alonso, Michaela Blott, and Jens Teubner