Identifying Key Enablers in Edge Intelligence
(22. Aug – 25. Aug, 2021)
- Aaron Ding (TU Delft, NL)
- Ella Peltonen (University of Oulu, FI)
- Sasu Tarkoma (University of Helsinki, FI)
- Lars Wolf (TU Braunschweig, DE)
- Shida Kunz (for scientific questions)
- Jutka Gasiorowski (for administrative questions)
- Roadmap for Edge AI : A Dagstuhl Perspective - Ding, Aaron Yi; Peltonen, Ella; Meuser, Tobias; Wolf, Lars; Tarkoma, Sasu; Schulzrinne, Henning; Schulte, Stefan; Ott, Jörg; Liyanage, Madhusanka; Kranzlmüller, Dieter; Dustdar, Schahram; Becker, Christian; Maghsudi, Setareh; Mohan, Nitinder; Rellermeyer, Jan S.; Solmaz, Gürkan; Varghese, Blesson; Aral, Atakan; Hiessl, Thomas - Cornell University : arXiv.org, 2021. - 6 pp.
- Roadmap for edge AI : a Dagstuhl perspective : article - Ding, Aaron Yi; Peltonen, Ella; Meuser, Tobias; Aral, Atakan; Maghsudi, Setareh; Wolf, Lars Christian; Varghese, Blesson; Tarkoma, Sasu; Solmaz, Gürkan; Schulzrinne, Henning; Schulte, Stefan; Rellermeyer, Jan S.; Ott, Jörg; Mohan, Nitinder; Liyanage, Madhusanka; Kranzlmüller, Dieter; Hiessl, Thomas; Dustdar, Schahram; Becker, Christoph - New York : ACM, 2022. - pp. 28-33 - (ACM SIGCOMM Computer Communication Review ; 52. 2022, 1).
The Dagstuhl Seminar "Identifying Key Enablers in Edge Intelligence" brings together experts from distributed computing systems, edge infrastructures, applied artificial intelligence, machine learning, and related fields. Edge computing, a key part of 5G networks and beyond, promises to decentralize cloud applications while providing more bandwidth and reducing latencies. These promises are delivered by moving application-specific computations between the cloud, the data-producing devices, and the network infrastructure components at the edges of wireless and fixed networks. However, current AI/ML methods assume that computations are conducted in a powerful computational infrastructure, such as a homogeneous cloud with ample computing and data storage resources. In this seminar, we discuss and develop a comprehensive view of AI methods and capabilities in the context of edge computing, and provide a roadmap that brings together enablers and key aspects of edge computing and the applied AI/ML fields.
Edge computing, a key part of the upcoming 5G mobile networks and future 6G technologies, promises to decentralize cloud applications while providing more bandwidth and reducing latencies. These promises are delivered by moving application-specific computations between the cloud, the data-producing devices, and the network infrastructure components at the edges of wireless and fixed networks. Previous work has shown that edge computing devices can execute computing tasks with high energy efficiency and, when combined, can offer computing power comparable to server computers.
In stark contrast to current edge-computing development, current artificial intelligence (AI) and in particular machine-learning (ML) methods assume that computations are conducted in a powerful computational infrastructure, such as a homogeneous cloud with ample computing and data storage resources. This model requires transmitting data from end-user devices to the cloud, demanding significant bandwidth and suffering from latency. Bringing computation close to the end-user devices is essential for reducing latency and ensuring real-time response for applications and services. Currently, however, these benefits cannot be achieved because the perspective of "edge for AI", or even "communication for AI", has been understudied. Indeed, previous studies address AI only to a limited extent from different perspectives of the Internet of Things, edge computing, and networks.
Clear benefits can be identified from the interplay of ML/AI and edge computing. We divide this interplay into edge computing for AI and AI for edge computing. Distributed AI functionality can further be divided into edge computing for communication, platform control, security, privacy, and application- or service-specific aspects. Edge computing for AI centres on the challenge of adapting current centralized ML and autonomous decision-making algorithms to the intermittent connectivity and the distributed nature of edge computing. AI for edge computing, on the other hand, concentrates on using AI methods to improve the edge applications or the functionalities provided by the edge computing platform: enhancing connectivity, network orchestration, edge platform management, privacy, or security, or providing autonomy and personalized intelligence at the application level.
Previous studies address accommodating AI methods for different perspectives of IoT, edge computing, and networks. However, there is still a need for a holistic view of AI methods and capabilities in the context of edge computing, comprising, for example, predictive data analysis, machine learning, reasoning, and autonomous agents with learning and cognitive capabilities. Further, the edge environment, with its opportunistic nature, intermittent connectivity, and interplay of numerous stakeholders, presents a unique setting for deploying such applications based on computation units with different degrees of intelligence capabilities.
The AI methods used in edge computing can be further divided into learning and decision making. Learning refers to building, maintaining, and making predictions with ML models, especially neural networks. Decision making is the business logic, that is, the process of acting upon the predictions. This is the domain of decision theory, control theory, and game theory, whose solutions and equilibria are now often estimated from data by reinforcement-learning methods.
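The decision-making side can be illustrated with a minimal reinforcement-learning sketch (plain Python; the offloading scenario, action set, and reward values are hypothetical, chosen only to show the explore/exploit decision step and the incremental learning step):

```python
import random

# Epsilon-greedy sketch (illustrative, hypothetical rewards): an edge node
# learns from observed outcomes whether to run a task locally or offload it.

def epsilon_greedy(q_values, epsilon, rng):
    """Decision step: mostly exploit the best-known action, sometimes explore."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

rng = random.Random(0)
true_reward = [0.3, 0.7]   # assumption: offloading (action 1) pays off more
q = [0.0, 0.0]             # learned value estimates, one per action
counts = [0, 0]

for _ in range(500):
    a = epsilon_greedy(q, epsilon=0.1, rng=rng)
    r = true_reward[a] + rng.gauss(0, 0.1)  # noisy observed reward
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]          # learning step: incremental mean
```

After enough steps the node's estimates separate the two actions and it offloads most of the time, which is the "acting upon predictions" loop described above in its simplest form.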
Currently, AI's cloud-centric architecture requires transmitting raw data from the end-user devices to the cloud, introducing latencies, endangering privacy, and consuming significant data transmission resources. The next step, currently under active research, is distributed or federated AI, which builds and maintains a central model in the cloud or on the edge but allows user devices to update the model and use it locally for predictions. We envision a fully decentralized AI which flattens the distributed hierarchy, with the joint model built and maintained by devices, edge nodes, and cloud nodes with equal responsibility.
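The federated step described above can be made concrete with a toy federated-averaging loop (plain Python; the "local training" function and the data are hypothetical stand-ins for real on-device learning): each device computes an update on data that never leaves it, and a coordinator only averages the resulting weights.

```python
# Toy federated-averaging sketch (illustration only, not a real FL framework):
# devices update a shared model locally; the coordinator averages the weights.

def local_update(weights, local_data, lr=0.1):
    """Stand-in for on-device training: nudge each weight toward the
    local data mean. Only the updated weights are shared, not the data."""
    target = sum(local_data) / len(local_data)
    return [w + lr * (target - w) for w in weights]

def federated_average(updates):
    """Coordinator step: element-wise mean of the device updates."""
    return [sum(ws) / len(updates) for ws in zip(*updates)]

# Three devices hold different private data (means 2, 2, and 5).
global_model = [0.0, 0.0]
device_data = [[1.0, 3.0], [2.0, 2.0], [4.0, 6.0]]

for _ in range(50):  # communication rounds
    updates = [local_update(global_model, d) for d in device_data]
    global_model = federated_average(updates)
```

Over the rounds the shared weights converge toward the mean of the per-device targets (here 3.0) even though no raw data point was ever transmitted, which is the bandwidth- and privacy-saving property the paragraph above describes.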
The present challenges for AI in edge computing converge on 1) finding novel neural network architectures and their topological splits, together with training and inference algorithms that are fast and reliably accurate; 2) distributing and decentralizing model building and sharing into the edge, allowing both local, fast-to-build personalized models and global, collaborative models with information sharing; and 3) integrating these novel methods with key algorithmic solutions so they can be utilised in edge-native AI applications. The ground-breaking objectives and novel concepts that edge-native artificial intelligence brings are:
- Edge-native AI can be used to obtain higher-quality data from massive Internet of Things, Web of Things, and other edge networks by filtering out large volumes of noise, context labelling, dynamic sampling, data cleaning, etc. High-quality data can then feed both edge inference and cloud-based data analysis systems, for example, training large-scale machine learning models.
- Edge computing provides low latency, which is crucial especially for real-time applications, such as anything related to driving and smart mobility. AI applications deployed at the edge, and thus closer to the end user, will not only speed up existing applications but also enable completely new solutions.
- Edge-based computing provides data privacy when users are involved: raw data need not be shared with cloud services, only the locally learned model.
- With edge computing used for AI/ML model building, personalisation of such models can be done in local environments without unnecessary transmission overhead (when only local data is considered for model building anyway). Global models built in the cloud environment can support these local models whenever a collaborative, larger, or more general model is required.
- Edge-native AI/ML tasks provide mobility of computation and cloudlet-like processing at the edge. Compared to cloudlets, edge computing offers more flexibility and more dynamic operation for load balancing, task management, distribution of models, etc.
- Lightweight computation on edge devices and in local environments can enable energy savings.
- Ethical data management: edge-native AI can keep data ownership and control closer to the user, e.g., when computation is managed and task distribution is controlled from the user's own devices, with suitable security and privacy protection methods in place.
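One concrete reading of the "topological splits" in challenge 1 above is split inference: the first layers of a network run on the device and the remainder on an edge server, so only a compact intermediate activation crosses the network instead of the raw input. A minimal sketch with toy layer functions (illustrative only, not a real neural network):

```python
# Split-inference sketch (illustrative): a tiny "network" of layer functions
# is partitioned so the device runs the early layers and an edge server the
# rest. Only the intermediate activation is transmitted, and it is often
# smaller than the raw input.

layers = [
    lambda x: [v * 2.0 for v in x],  # device-side feature extraction
    lambda x: [sum(x)],              # compresses the input to one value
    lambda x: [x[0] + 1.0],          # edge-server-side layers
    lambda x: [x[0] * 0.5],
]

def run(part, x):
    """Apply a sequence of layers to an input."""
    for layer in part:
        x = layer(x)
    return x

def split_inference(x, split_point):
    activation = run(layers[:split_point], x)      # runs on the device
    # ...only `activation` would be sent over the network here...
    return run(layers[split_point:], activation)   # runs on the edge server

raw = [1.0, 2.0, 3.0]
full = run(layers, raw)                     # reference: everything in one place
split = split_inference(raw, split_point=2) # same result, computed in two places
```

With the split after the second layer, the transmitted activation is a single value rather than the three-element raw input, while the final result is identical to running the whole network in one place; choosing such split points under accuracy, latency, and bandwidth constraints is exactly the open challenge named above.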
- Christian Becker (Universität Mannheim, DE) [dblp]
- Schahram Dustdar (TU Wien, AT) [dblp]
- Janick Edinger (Universität Hamburg, DE) [dblp]
- Lauri Lovén (University of Oulu, FI) [dblp]
- Tri Nguyen (University of Oulu, FI) [dblp]
- Ella Peltonen (University of Oulu, FI) [dblp]
- Jan Rellermeyer (TU Delft, NL) [dblp]
- Martijn Warnier (TU Delft, NL) [dblp]
- Lars Wolf (TU Braunschweig, DE) [dblp]
- Atakan Aral (Universität Wien, AT) [dblp]
- Jari Arkko (Ericsson - Jorvas, FI) [dblp]
- Yiran Chen (Duke University - Durham, US) [dblp]
- Eyal De Lara (University of Toronto, CA) [dblp]
- Aaron Ding (TU Delft, NL) [dblp]
- Fred Douglis (Peraton Labs - Basking Ridge, US) [dblp]
- Diego Ferran (Telefónica Research - Barcelona, ES) [dblp]
- Thomas Hiessl (Siemens AG - Wien, AT) [dblp]
- Dewant Katare (TU Delft, NL) [dblp]
- Dieter Kranzlmüller (LMU München, DE) [dblp]
- Ling Liu (Georgia Institute of Technology - Atlanta, US) [dblp]
- Madhusanka Liyanage (University College Dublin, IE) [dblp]
- Ivan Lujic (TU Wien, AT) [dblp]
- Setareh Maghsudi (Universität Tübingen, DE) [dblp]
- Nitinder Mohan (TU München, DE) [dblp]
- Iqbal Mohomed (Samsung AI Research - Toronto, CA) [dblp]
- Roberto Morabito (Ericsson - Jorvas, FI) [dblp]
- Petteri Nurmi (University of Helsinki, FI) [dblp]
- Jörg Ott (TU München, DE) [dblp]
- Francesco Regazzoni (University of Amsterdam, NL & Università della Svizzera italiana, CH) [dblp]
- Olga Saukh (TU Graz, AT) [dblp]
- Stefan Schulte (TU Hamburg, DE) [dblp]
- Henning Schulzrinne (Columbia University - New York, US) [dblp]
- Maarten Sierhuis (Nissan Research Center - Sunnyvale, US) [dblp]
- Stephan Sigg (Aalto University, FI) [dblp]
- Pieter Simoens (Ghent University, BE) [dblp]
- Gürkan Solmaz (NEC Laboratories Europe - Heidelberg, DE) [dblp]
- Sasu Tarkoma (University of Helsinki, FI) [dblp]
- Wiebke Toussaint (TU Delft, NL) [dblp]
- Antero Vainio (University of Helsinki, FI)
- Marten Van Dijk (CWI - Amsterdam, NL) [dblp]
- Maarten van Steen (University of Twente, NL) [dblp]
- Blesson Varghese (Queen's University Belfast, GB) [dblp]
- Shiqiang Wang (IBM TJ Watson Research Center - Yorktown Heights, US) [dblp]
- Klaus Wehrle (RWTH Aachen, DE) [dblp]
- Michael Welzl (University of Oslo, NO) [dblp]
- Chenren Xu (Peking University, CN) [dblp]
- Dagstuhl-Seminar 23432: Edge-AI: Identifying Key Enablers in Edge Intelligence (2023-10-22 - 2023-10-25)
- Artificial Intelligence
- Distributed, Parallel, and Cluster Computing
- Networking and Internet Architecture
- Edge Computing
- Communication Networks
- Artificial Intelligence
- Intelligent Networking