OASIcs, Volume 100

13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)




Event

PARMA-DITAM 2022, June 22, 2022, Budapest, Hungary

Editors

Francesca Palumbo
  • University of Sassari, Italy
João Bispo
  • University of Porto, Portugal
Stefano Cherubin
  • Edinburgh Napier University, UK

Publication Details

  • published at: 2022-06-08
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-231-0
  • DBLP: db/conf/hipeac/parma2022

Documents

Document
Complete Volume
OASIcs, Volume 100, PARMA-DITAM 2022, Complete Volume

Authors: Francesca Palumbo, João Bispo, and Stefano Cherubin


Abstract
OASIcs, Volume 100, PARMA-DITAM 2022, Complete Volume

Cite as

13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 1-104, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Proceedings{palumbo_et_al:OASIcs.PARMA-DITAM.2022,
  title =	{{OASIcs, Volume 100, PARMA-DITAM 2022, Complete Volume}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{1--104},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022},
  URN =		{urn:nbn:de:0030-drops-161152},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022},
  annote =	{Keywords: OASIcs, Volume 100, PARMA-DITAM 2022, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Francesca Palumbo, João Bispo, and Stefano Cherubin


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 0:i-0:viii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{palumbo_et_al:OASIcs.PARMA-DITAM.2022.0,
  author =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{0:i--0:viii},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.0},
  URN =		{urn:nbn:de:0030-drops-161168},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
SO(DA)^2: End-to-end Generation of Specialized Reconfigurable Architectures (Invited Talk)

Authors: Antonino Tumeo, Nicolas Bohm Agostini, Serena Curzel, Ankur Limaye, Cheng Tan, Vinay Amatya, Marco Minutoli, Vito Giovanni Castellana, Ang Li, and Joseph Manzano


Abstract
Modern data analysis applications are complex workflows composed of algorithms with diverse behaviors. They may include digital signal processing, data filtering, reduction, compression, graph algorithms, and machine learning. Their performance is highly dependent on the volume, the velocity, and the structure of the data. They are used in many different domains (from small, embedded devices to large-scale, high-performance computing systems), but in all cases they need to provide answers with very low latency to enable real-time decision making and autonomy. Coarse-grained reconfigurable arrays (CGRAs), i.e., architectures composed of functional units able to perform complex operations, interconnected through a network-on-chip, and able to configure their datapath to map complex kernels, are a promising platform to accelerate these applications thanks to their adaptability. They provide higher flexibility than application-specific integrated circuits (ASICs) while offering increased energy efficiency and faster reconfiguration speed with respect to field-programmable gate arrays (FPGAs). However, designing and specializing CGRAs requires significant effort. The inherent flexibility of these devices makes the application mapping process as important as the hardware design generation. To obtain efficient systems, approaches that simultaneously consider software and hardware optimizations are necessary. In this paper, we discuss the Software Defined Architectures for Data Analytics (SO(DA)²) toolchain, an end-to-end hardware/software codesign framework to generate custom reconfigurable architectures for data analytics applications. SO(DA)² is composed of a high-level compiler (SODA-OPT) and a hardware generator (OpenCGRA) and can automatically explore and generate optimal CGRA designs starting from high-level programming frameworks. SO(DA)² considers partial dynamic reconfiguration as a key element of the system design. We discuss the various elements of the framework and demonstrate the flow on a case study of a partially dynamically reconfigurable CGRA design for data streaming applications.
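
The kernel-mapping step mentioned in this abstract can be pictured with a minimal sketch: placing the nodes of a dataflow graph onto a grid of CGRA processing elements while keeping producers and consumers close. The code below is a hypothetical greedy placement, not the SODA-OPT or OpenCGRA API; the names DFGNode and place_greedy and the 4x4 grid size are illustrative assumptions.

# Hypothetical sketch: greedy placement of a dataflow graph (DFG) onto a
# 2D grid of CGRA processing elements (PEs). Not the SODA-OPT/OpenCGRA API.
from dataclasses import dataclass, field

@dataclass
class DFGNode:
    name: str
    op: str                          # e.g. "add", "mul", "load"
    preds: list = field(default_factory=list)

def place_greedy(nodes, rows=4, cols=4):
    """Assign each DFG node to a free PE, preferring PEs adjacent to its
    first predecessor to keep network-on-chip routes short. Assumes the
    nodes arrive in topological order and fit on the grid."""
    placement, free = {}, {(r, c) for r in range(rows) for c in range(cols)}
    for node in nodes:
        candidates = free
        if node.preds:
            pr, pc = placement[node.preds[0].name]
            near = {(pr + dr, pc + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)} & free
            candidates = near or free        # fall back to any free PE
        pe = min(candidates)                 # deterministic tie-breaking
        placement[node.name] = pe
        free.remove(pe)
    return placement

# Tiny kernel: d = (a + b) * c
a, b, c = DFGNode("a", "load"), DFGNode("b", "load"), DFGNode("c", "load")
s = DFGNode("s", "add", [a, b])
d = DFGNode("d", "mul", [s, c])
print(place_greedy([a, b, c, s, d]))

A real mapper would co-optimize placement, routing, and scheduling; the sketch only conveys why mapping quality and hardware generation have to be considered together.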

Cite as

Antonino Tumeo, Nicolas Bohm Agostini, Serena Curzel, Ankur Limaye, Cheng Tan, Vinay Amatya, Marco Minutoli, Vito Giovanni Castellana, Ang Li, and Joseph Manzano. SO(DA)^2: End-to-end Generation of Specialized Reconfigurable Architectures (Invited Talk). In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 1:1-1:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{tumeo_et_al:OASIcs.PARMA-DITAM.2022.1,
  author =	{Tumeo, Antonino and Agostini, Nicolas Bohm and Curzel, Serena and Limaye, Ankur and Tan, Cheng and Amatya, Vinay and Minutoli, Marco and Castellana, Vito Giovanni and Li, Ang and Manzano, Joseph},
  title =	{{SO(DA)^2: End-to-end Generation of Specialized Reconfigurable Architectures}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{1:1--1:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.1},
  URN =		{urn:nbn:de:0030-drops-161175},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.1},
  annote =	{Keywords: Reconfigurable architectures, data analytics}
}
Document
Invited Talk
Just-In-Time Composition of Reconfigurable Overlays (Invited Talk)

Authors: Rafael Zamacola, Andrés Otero, Alfonso Rodríguez, and Eduardo de la Torre


Abstract
This paper describes a framework supporting the automatic composition of reconfigurable overlays laid on top of an FPGA to offload computing-intensive sections of a given application from an embedded processor to a loosely coupled reconfigurable accelerator. Overlays provide an abstraction layer acting as an intermediate fabric between users' applications and the FPGA fabric. Among the existing flavors, the overlay template proposed in this work is based on a coarse-grain reconfigurable architecture featuring word-level operators, reducing the long place-and-route times associated with FPGA designs. The proposed overlays are composed at run time using a tile-based approach, in which pre-synthesized processing elements are stitched together following a 2D grid pattern using dynamic and partial reconfiguration. The proposed reconfigurable architecture is accompanied by an automated toolchain that, relying on an LLVM intermediate representation, automatically converts the source code to a data-flow graph that is afterward mapped onto the overlay. A mapping example is provided in this paper to show the possibilities enabled by the framework, including loop mapping and loop unrolling support, features originally described in this work.
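
The tile-based composition described here can be illustrated with a rough sketch: each grid cell corresponds to a reconfigurable region that is programmed with one of a small library of pre-synthesized processing-element variants. The names PE_LIBRARY, load_partial_bitstream, and the bitstream file names below are assumptions for illustration, not the framework's actual interface.

# Hypothetical sketch of tile-based overlay composition: pre-synthesized
# processing-element variants are stitched into a 2D grid at run time via
# dynamic partial reconfiguration. Names and files are illustrative.
PE_LIBRARY = {                  # pre-synthesized partial bitstreams per PE type
    "alu":  "pe_alu.bit",
    "mul":  "pe_mul.bit",
    "pass": "pe_pass.bit",
}

def load_partial_bitstream(region, path):
    # Placeholder for the platform's partial-reconfiguration call.
    print(f"reconfiguring region {region} with {path}")

def compose_overlay(grid):
    """grid is a 2D list of PE type names; each cell maps to one
    independently programmable reconfigurable region."""
    for r, row in enumerate(grid):
        for c, pe_type in enumerate(row):
            load_partial_bitstream((r, c), PE_LIBRARY[pe_type])

# Overlay specialized at run time for a multiply-accumulate datapath
compose_overlay([
    ["mul", "mul", "pass"],
    ["alu", "alu", "pass"],
])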

Cite as

Rafael Zamacola, Andrés Otero, Alfonso Rodríguez, and Eduardo de la Torre. Just-In-Time Composition of Reconfigurable Overlays (Invited Talk). In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 2:1-2:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{zamacola_et_al:OASIcs.PARMA-DITAM.2022.2,
  author =	{Zamacola, Rafael and Otero, Andr\'{e}s and Rodr{\'\i}guez, Alfonso and de la Torre, Eduardo},
  title =	{{Just-In-Time Composition of Reconfigurable Overlays}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{2:1--2:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.2},
  URN =		{urn:nbn:de:0030-drops-161186},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.2},
  annote =	{Keywords: FPGA, Dynamic Partial Reconfiguration, Overlay, LLVM, Compilation}
}
Document
COLA-Gen: Active Learning Techniques for Automatic Code Generation of Benchmarks

Authors: Maksim Berezov, Corinne Ancourt, Justyna Zawalska, and Maryna Savchenko


Abstract
Benchmarking is crucial in code optimization. A set of representative programs is required to validate optimization techniques or evaluate predictive performance models. However, there is a shortage of available benchmarks for code optimization, which is even more pronounced when machine learning techniques are used, because these techniques are sensitive to the quality and quantity of the data used for training. Our work aims to address these limitations. We present a methodology to efficiently generate benchmarks for the code optimization domain. It includes an automatic code generator, an associated DSL for the high-level specification of the desired code, and a smart strategy for extending the benchmark as needed. The strategy is based on Active Learning techniques and helps to generate the most representative data for our benchmark. We observed that Machine Learning models trained on our benchmark produce better-quality predictions and converge faster. The optimization based on the Active Learning method achieved up to 15% more speed-up than the passive learning method using the same amount of data.
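
The general shape of such an active-learning loop can be sketched as follows. This is a generic query-by-committee example under stated assumptions: generate_candidate stands in for the DSL-driven code generator, measure_speedup stands in for compiling and running a generated program, and the random-forest disagreement criterion is only one possible query strategy, not necessarily the paper's.

# Minimal active-learning loop for benchmark extension (illustrative only).
import random
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def generate_candidate():
    """Stand-in for the DSL-driven generator: a feature vector describing a
    synthetic loop nest (iteration count, loop depth, arithmetic intensity)."""
    return np.array([random.randint(4, 1024),
                     random.randint(1, 4),
                     random.random()])

def measure_speedup(x):
    """Stand-in for compiling and benchmarking the generated program."""
    return x[2] * np.log2(x[0]) / x[1] + random.gauss(0, 0.1)

X = [generate_candidate() for _ in range(10)]   # small seed benchmark
y = [measure_speedup(x) for x in X]

for _ in range(20):                              # active-learning iterations
    model = RandomForestRegressor(n_estimators=50).fit(X, y)
    pool = [generate_candidate() for _ in range(200)]
    # Query the candidate on which the committee of trees disagrees the most.
    per_tree = np.stack([t.predict(pool) for t in model.estimators_])
    best = int(np.argmax(per_tree.std(axis=0)))
    X.append(pool[best])
    y.append(measure_speedup(pool[best]))

The key point mirrored here is that each new program is chosen because it is informative for the model, rather than sampled blindly, which is what lets the benchmark stay small while improving prediction quality.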

Cite as

Maksim Berezov, Corinne Ancourt, Justyna Zawalska, and Maryna Savchenko. COLA-Gen: Active Learning Techniques for Automatic Code Generation of Benchmarks. In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{berezov_et_al:OASIcs.PARMA-DITAM.2022.3,
  author =	{Berezov, Maksim and Ancourt, Corinne and Zawalska, Justyna and Savchenko, Maryna},
  title =	{{COLA-Gen: Active Learning Techniques for Automatic Code Generation of Benchmarks}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{3:1--3:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.3},
  URN =		{urn:nbn:de:0030-drops-161193},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.3},
  annote =	{Keywords: Benchmarking, Code Optimization, Active Learning, DSL, Synthetic code generation, Machine Learning}
}
Document
Energy-Aware HEVC Software Decoding On Mobile Heterogeneous Multi-Cores Architectures

Authors: Mohammed Bey Ahmed Khernache, Jalil Boukhobza, Yahia Benmoussa, and Daniel Menard


Abstract
Video content is becoming increasingly omnipresent on mobile platforms thanks to advances in mobile heterogeneous architectures. These platforms typically include limited rechargeable batteries which do not improve as fast as video content. Most state-of-the-art studies proposed solutions based on parallelism to exploit the GPP heterogeneity and DVFS to scale up/down the GPP frequency based on the video workload. However, some studies assume to have information about the workload before to start decoding. Others do not exploit the asymmetry character of recent mobile architectures. To address these two challenges, we propose a solution based on classification and frequency scaling. First, a model to classify frames based on their type and size is built during design-time. Second, this model is applied for each frame to decide which GPP cores will decode it. Third, the frequency of the chosen GPP cores is dynamically adjusted based on the output buffer size. Experiments on real-world mobile platforms show that the proposed solution can save more than 20% of energy (mJ/Frame) compared to the Ondemand Linux governor with less than 5% of miss-rate. Moreover, it needs less than one second of decoding to enter the stable state and the overhead represents less than 1% of the frame decoding time.
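
The run-time part of such a scheme can be sketched in a few lines: a per-frame classifier (trained offline) picks a core cluster, and the cluster frequency is then chosen from the output-buffer occupancy. The cluster names, frequency tables, size threshold, and buffer thresholds below are illustrative assumptions, not the paper's measured parameters.

# Illustrative per-frame core selection and buffer-driven frequency scaling.
LITTLE_FREQS = [600, 1000, 1400]      # MHz, hypothetical operating points
BIG_FREQS    = [800, 1400, 2000]

def classify(frame_type, frame_size):
    """Stand-in for the design-time model: large or intra frames go big."""
    return "big" if frame_type == "I" or frame_size > 60_000 else "little"

def pick_frequency(freqs, buffer_fill, low=0.3, high=0.7):
    """Scale up when the output buffer risks underflow (a missed display
    deadline), scale down when it is comfortably full."""
    if buffer_fill < low:
        return freqs[-1]
    if buffer_fill > high:
        return freqs[0]
    return freqs[len(freqs) // 2]

frames = [("I", 90_000), ("P", 30_000), ("B", 12_000)]
buffer_fill = 0.5                      # fraction of the output buffer in use
for ftype, fsize in frames:
    cluster = classify(ftype, fsize)
    freqs = BIG_FREQS if cluster == "big" else LITTLE_FREQS
    freq = pick_frequency(freqs, buffer_fill)
    print(f"{ftype}-frame ({fsize} B) -> {cluster} cluster @ {freq} MHz")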

Cite as

Mohammed Bey Ahmed Khernache, Jalil Boukhobza, Yahia Benmoussa, and Daniel Menard. Energy-Aware HEVC Software Decoding On Mobile Heterogeneous Multi-Cores Architectures. In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{beyahmedkhernache_et_al:OASIcs.PARMA-DITAM.2022.4,
  author =	{Bey Ahmed Khernache, Mohammed and Boukhobza, Jalil and Benmoussa, Yahia and Menard, Daniel},
  title =	{{Energy-Aware HEVC Software Decoding On Mobile Heterogeneous Multi-Cores Architectures}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{4:1--4:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.4},
  URN =		{urn:nbn:de:0030-drops-161206},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.4},
  annote =	{Keywords: energy consumption, mobile platform, heterogeneous architecture, software video decoding, hardware video decoding, HEVC}
}
Document
Precision Tuning in Parallel Applications

Authors: Gabriele Magnani, Lev Denisov, Daniele Cattaneo, and Giovanni Agosta


Abstract
Parallel applications are used every day in high performance computing, scientific computing, and, due to the pervasiveness of multi-core architectures, in everyday tasks as well. However, several implementation challenges have so far stifled the integration of parallel applications and automatic precision tuning. First, tuning a parallel application makes it harder to detect the region of code that must be affected by the optimization. Moreover, additional challenges arise in handling shared variables and accumulators. In this work we address these challenges by introducing OpenMP parallel programming support in the TAFFO precision tuning framework. With our approach we achieve speedups of up to 750% with respect to the same parallel application without precision tuning.
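
The shared-accumulator issue can be illustrated numerically: when per-element data is narrowed to a fixed-point format, the reduction variable shared across threads must keep a wider format to avoid overflow. TAFFO itself operates on C/LLVM code with OpenMP; the Python sketch below only mirrors that numeric reasoning, and its Q16 format, chunking, and thread count are arbitrary assumptions.

# Conceptual sketch: narrow fixed-point elements, wide shared accumulator.
from concurrent.futures import ThreadPoolExecutor

FRAC_BITS = 16                                   # Q-format for the elements
def to_fixed(x): return int(round(x * (1 << FRAC_BITS)))
def to_float(q): return q / (1 << FRAC_BITS)

data = [0.001 * i for i in range(10_000)]
fixed_data = [to_fixed(x) for x in data]

def partial_sum(chunk):
    # The per-thread partial sum plays the role of the reduction variable:
    # it accumulates in a type wider than the element format (here an
    # unbounded Python int; in C, a wider integer chosen by the tuner).
    return sum(chunk)

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = [fixed_data[i::4] for i in range(4)]
    total = sum(pool.map(partial_sum, chunks))

print(to_float(total), sum(data))                # tuned result vs reference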

Cite as

Gabriele Magnani, Lev Denisov, Daniele Cattaneo, and Giovanni Agosta. Precision Tuning in Parallel Applications. In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 5:1-5:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{magnani_et_al:OASIcs.PARMA-DITAM.2022.5,
  author =	{Magnani, Gabriele and Denisov, Lev and Cattaneo, Daniele and Agosta, Giovanni},
  title =	{{Precision Tuning in Parallel Applications}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{5:1--5:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.5},
  URN =		{urn:nbn:de:0030-drops-161210},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.5},
  annote =	{Keywords: Compilers, Parallel Programming, Precision Tuning}
}
Document
Multithread Accelerators on FPGAs: A Dataflow-Based Approach

Authors: Francesco Ratto, Stefano Esposito, Carlo Sau, Luigi Raffo, and Francesca Palumbo


Abstract
Multithreading is a well-known technique for general-purpose systems to deliver a substantial performance gain, raising resource efficiency by exploiting periods of underutilization. With the rise of specialized hardware, resource efficiency has become fundamental to amortizing the overhead introduced by such devices. In this work, we propose a model-based approach for designing specialized multithread hardware accelerators. This novel approach exploits dataflow models of applications and tagged tokens to let the resulting hardware support concurrent threads without replicating the whole accelerator. Assessment is carried out over different versions of an accelerator for a compute-intensive step of modern video coding algorithms, under several feeding configurations. Results highlight that the proposed multithread accelerators achieve a valuable tradeoff: they save computational resources with respect to replicated parallel single-thread accelerators, while guaranteeing shorter waiting, response, and processing times than one single-thread accelerator multiplexed in time.
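
The tagged-token mechanism at the heart of this approach can be shown with a minimal firing rule: tokens carry a thread tag, and an actor fires only when all of its inputs for the same tag have arrived, so threads can interleave on one non-replicated datapath. The class below is a software sketch of that rule under these assumptions, not the paper's hardware design.

# Minimal tagged-token firing rule for a multithread dataflow actor.
from collections import defaultdict

class TaggedActor:
    def __init__(self, n_inputs, op):
        self.pending = defaultdict(dict)       # tag -> {port: value}
        self.n_inputs, self.op = n_inputs, op

    def push(self, tag, port, value):
        """Accept one input token; fire when all ports hold a token with
        this tag, and emit an output token carrying the same tag."""
        self.pending[tag][port] = value
        if len(self.pending[tag]) == self.n_inputs:
            args = [self.pending[tag][p] for p in range(self.n_inputs)]
            del self.pending[tag]
            return tag, self.op(*args)
        return None

mul = TaggedActor(2, lambda a, b: a * b)
# Tokens from two threads arrive interleaved; results come out per tag.
for token in [("t0", 0, 3), ("t1", 0, 5), ("t1", 1, 7), ("t0", 1, 4)]:
    fired = mul.push(*token)
    if fired:
        print(fired)                            # ('t1', 35) then ('t0', 12)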

Cite as

Francesco Ratto, Stefano Esposito, Carlo Sau, Luigi Raffo, and Francesca Palumbo. Multithread Accelerators on FPGAs: A Dataflow-Based Approach. In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ratto_et_al:OASIcs.PARMA-DITAM.2022.6,
  author =	{Ratto, Francesco and Esposito, Stefano and Sau, Carlo and Raffo, Luigi and Palumbo, Francesca},
  title =	{{Multithread Accelerators on FPGAs: A Dataflow-Based Approach}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{6:1--6:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.6},
  URN =		{urn:nbn:de:0030-drops-161225},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.6},
  annote =	{Keywords: multithreading, dataflow, hardware acceleration, heterogeneous systems, tagged dataflow}
}
Document
Efficient Memory Management for Modelica Simulations

Authors: Michele Scuttari, Nicola Camillucci, Daniele Cattaneo, Federico Terraneo, and Giovanni Agosta


Abstract
The ever-increasing use of simulations to produce digital twins of physical systems has led to the creation of specialized equation-based modeling languages such as Modelica. However, compilers for such languages often generate code that relies on garbage collection for memory management, which introduces significant runtime overhead. In this paper we explain how to improve the memory management of automatically generated simulation code. This is achieved by addressing two different aspects. The first is the reduction of heap memory usage, obtained by modifying functions so that their resulting arrays can instead be allocated on the stack by the caller. The second is the possibility of avoiding garbage collection altogether by tracking all memory lifetimes statically. We implement our approach in a prototype Modelica compiler, reducing the memory management overhead by more than 10 times compared to a garbage-collected solution, and by 56 times compared to the production-grade compiler OpenModelica.
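
The caller-allocation idea can be pictured with a simple analogue: instead of each call allocating and returning a fresh result array (heap churn and collector pressure), the caller owns one buffer that the callee fills in place. The function names and the use of NumPy below are purely illustrative and do not correspond to the prototype compiler's generated code.

# Analogue of the caller-allocation transformation described above.
import numpy as np

def derivatives_alloc(state):
    return np.sin(state) * 0.5           # allocates a new array on every call

def derivatives_inplace(state, out):
    np.sin(state, out=out)               # writes into the caller's buffer
    out *= 0.5
    return out

state = np.linspace(0.0, 1.0, 1_000)
buf = np.empty_like(state)               # allocated once, outside the loop
for _ in range(10_000):                  # simulation time steps
    derivatives_inplace(state, buf)      # no per-step heap allocation

Hoisting the allocation out of the hot simulation loop is what removes the steady stream of short-lived arrays that a garbage collector would otherwise have to manage.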

Cite as

Michele Scuttari, Nicola Camillucci, Daniele Cattaneo, Federico Terraneo, and Giovanni Agosta. Efficient Memory Management for Modelica Simulations. In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 7:1-7:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{scuttari_et_al:OASIcs.PARMA-DITAM.2022.7,
  author =	{Scuttari, Michele and Camillucci, Nicola and Cattaneo, Daniele and Terraneo, Federico and Agosta, Giovanni},
  title =	{{Efficient Memory Management for Modelica Simulations}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{7:1--7:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.7},
  URN =		{urn:nbn:de:0030-drops-161237},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.7},
  annote =	{Keywords: Modelica, modeling \& simulation, memory management, garbage collection}
}
