

Dagstuhl-Seminar 22272

Eat-IT: Interactive Food

(July 3 – July 8, 2022)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/22272


Summary

In July 2022, 21 researchers and academics from Europe, Australasia and the USA gathered for a week to discuss the future convergence of food and information technology (IT), called eat-IT for short.

Eating is a basic human need, and there is growing interest in the field of Human-Computer Interaction (HCI) in designing new interactive food experiences, for example, to promote healthier food practices (e.g., [4, 3, 12, 17]), to make eating a more enjoyable experience (e.g., [18, 19, 21, 10, 6]), and to design multisensory eating experiences (e.g., [13, 15, 16]). Theoretical work around the design of interactive food has also emerged: Grimes and Harper [5] proposed that a new view on human-food interaction (HFI) is required and introduced the concept of "celebratory technology", which emphasizes the positive aspects of eating in everyday life.

Computational technology can make a significant contribution towards such celebratory technology. For example, Khot et al. [7] presented TastyBeats, a system that, instead of presenting physical activity data on a screen, offers users personalised sports drinks whose quantity and flavour are based on the amount of exercise the user has done. In a similar vein, EdiPulse [8] was introduced as a system that creates activity treats (chocolate creations) using a food printer. The shape and quantity of the prints were based on the person's physical activity that day, allowing for personal and shared reflection through consuming chocolate instead of looking at graphs on a screen. Computer science, and in particular the information visualization community, can therefore regard food (and drinks) as a medium that makes data more approachable for people, communicating complex information in an easy-to-digest format [10].

Parametric design approaches have also influenced the way food is produced. For example, Wang et al. [20] developed the concept of shape-changing and programmable food that transforms during the cooking process. Through a material-based interaction design approach, the authors demonstrated the transformation of 2D into 3D food (i.e., pasta). They proposed these transformations for new dining experiences that can surprise users, but such transformations could also be useful in outer space, where food could be shipped flat and only take on its 3D form through the cooking process.

Furthermore, technological advancements in acoustic levitation have led to taste-delivery technology that transports and manipulates food in mid-air [18], enabling novel interactions between diners and food that are of interest to HCI researchers because they allow augmented food experiences to be studied without the use of cutlery. This work has since been extended from taste stimulation towards a multisensory experience of levitating food through the integration of smell, directional sound, light, and touch [19]. Furthermore, robots are now in use to serve ice cream to the general public [2]. Lastly, laser cutters have already been used to embed data into cookies by engraving QR codes [14]. Taken together, these examples suggest that computing technology can play a major role in the way we engage with food. In particular, there is a realization that technology can facilitate both instrumental benefits with regard to food (such as improved health through better food choices) and experiential benefits (such as enriched social experiences). In summary, computational technology has the potential to influence how people experience eating.

However, this notion of what we call “interactive food” also raises significant concerns. Will computing technology distract from the pure pleasures of eating? Will people accept meals that are optimized through data-driven approaches? Will people enjoy food that is served by robots? Will people understand and act on data that is embedded in food? Questions such as these and, of course, their answers are important for the future of the field, and the seminar tried to investigate them.

The seminar was based on the belief that computer scientists, designers, developers, researchers, chefs, restaurateurs, producers, canteen managers, etc. can learn from each other to positively influence the future of interactive food. Working together allows for the identification of new opportunities the field offers, but it will also highlight the challenges that the community will need to overcome. In particular, it is still unknown what theory to use when designing computational systems in which the interaction is highly multisensory, in contrast to traditional mouse, keyboard and screen interactions. Furthermore, it is unclear how interacting with food benefits from, and is also challenged by, our mostly three-times-a-day engagement with it (breakfast, lunch and dinner), again unlike our interactions with mobile phones, which can occur at any time.

Furthermore, how do we create and evaluate interactions with computationally augmented food that needs preparation time, again very different from our usually immediate interactions with interactive technology? What interaction design theory can guide us in answering these questions so that computer science can be extended to include food interactions? If such theoretical questions could be answered, then, as a flow-on effect, more insights could be generated on how to evaluate the success of such interactions. The result would be not only more engaging eating experiences, but also the potential to influence when, how and what people eat. This can have major health implications, possibly addressing major issues such as overeating, which results in obesity and in turn a higher risk of diabetes, heart disease, stroke, bone and joint diseases, sleep apnea, cancer, and overall reduced life expectancy and quality of life [1]. Interrogating such topics is important because otherwise industry advances, which can easily dismiss or overlook negative consequences when it comes to combining computational technology and food, will drive the field forward. It is imperative to get ahead of the curve and steer the field in the right direction through an interdisciplinary approach involving the set of experts brought together by the seminar.

Although an increasing number of such systems are emerging, there is limited knowledge about how to design them in a structured way, how to evaluate their effectiveness and the associated user experiences, and how to derive theory from them in order to confirm, extend or reject existing theory. The seminar therefore examined these questions in order to drive a more positive future around interactive food.

Understanding the role of computational technology in this area is a way to make a positive contribution and guide the field. There are several areas of immediate importance, which we highlight here:

  • With advances in ubiquitous sensing systems, such as wearables, personal data becomes available in abundance. Such personal data can be embedded into food, either to communicate it to users in engaging ways or to personalize the food, such as by presenting meals that contain only those calories previously expended. Whether users understand such data visualizations, or want to eat size-controlled portions based on personal data, is an ongoing question. Issues of privacy and of sharing such data with chefs and kitchens are also open questions to be investigated.
  • With advances in persuasive technology (already utilized in the form of mobile apps that aim to persuade people to eat more healthily), new opportunities arise to combine multiple sensor data streams from an IoT infrastructure to develop more persuasive systems. How well people adhere to such approaches and change their behaviour for the better is still an underdeveloped area that needs investigation.
  • Big data already used individually by producers, manufacturers and kitchens will increasingly converge, making it possible to monitor and influence the supply chain from the farm to the diner's plate. This can help optimize the sourcing of local produce, reducing environmental impact through shorter transport distances and less food waste. How to make sense of such data, and how to use machine learning and other AI advances so that this data makes a difference to every part of the supply chain, is an important area for future work.
  • Advanced sensor systems can now sense eating actions, such as through jaw movement [9]. By using machine learning, we can now gain an increased understanding of how people eat. This can inform the design of interactive systems that help people make better future eating choices. For example, people might stop overeating if a system could tell them that their stomach will produce a "full" feeling earlier than the 20 minutes it usually takes.
  • With the advances of mixed-reality systems, such as VR headsets and augmented reality on mobile phones, new opportunities arise for augmenting food. For example, prior work has shown that people who, through augmented reality, perceive cookies as bigger than they actually are change how much they eat [11]. There is therefore a significant opportunity to employ mixed reality to change, and offer new opportunities for, what and how we eat. How to draw on and incorporate multimodal interaction design theory already established in computer science is an open question for the community to investigate.
  • Robotic systems allow food to be prepared and served in novel and interesting ways; for example, robotic arms can already be purchased for installation in personal kitchens, and robots already serve ice cream in public ice cream shops. How to design such interactions so that they are engaging and safe, while still preserving the joy and benefit of being engaged in cooking activities, is an interesting area for future work.
  • Multisensory integration research has improved our understanding of how humans integrate sensory information to produce a unitary experience of the external world. It is reasonable to expect that technology will keep advancing and that sensory delivery will become more accurate. In addition, our understanding of the human senses and perception will become more precise through large-scale data from HCI and integration research. As such, there is the potential to systematize our definition of multisensory experiences through adaptive, computational design. This is exciting, but at the same time it raises big questions about the implications of multisensory experiences and our responsibility when developing them.
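To make the portion-personalization idea in the first bullet concrete, the following is a minimal sketch of matching a meal portion to calories a wearable reports as expended. The function name, calorie figures, and the portion cap are all illustrative assumptions, not part of any system discussed at the seminar:

```python
# Hypothetical sketch: size a meal portion so its energy content
# matches the calories a wearable reports as expended. All names
# and numbers here are illustrative assumptions.

def portion_grams(calories_expended: float,
                  kcal_per_100g: float,
                  max_grams: float = 400.0) -> float:
    """Return a portion size (in grams) whose energy content matches
    the calories previously expended, capped at a sensible maximum."""
    if kcal_per_100g <= 0:
        raise ValueError("kcal_per_100g must be positive")
    grams = 100.0 * calories_expended / kcal_per_100g
    return min(grams, max_grams)

# A 350 kcal workout and a pasta dish at 150 kcal per 100 g would
# suggest a portion of roughly 233 g.
print(round(portion_grams(350, 150)))  # 233
```

Even this toy calculation surfaces the open questions raised above: whether diners would accept such size-controlled portions, and who (chef, kitchen, app) gets to see the underlying personal data.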

The seminar began with talks by all attendees, in which they presented their work in the area and what they considered, from their perspective, the biggest challenges facing the field. After the presentations concluded, no more slides were used for the remainder of the week; all activities were conducted either as a town hall meeting or in breakout groups. This was supplemented by optional morning and evening activities, such as jogging, beach volleyball, foosball, and cycling.

The structure of the seminar was based around theory, design and their intersection.

References

  1. World Health Organization. The World Health Report, 2010.
  2. Home – Niska Retail, Oct 2021.
  3. Rob Comber, Eva Ganglbauer, Jaz Hee-jeong Choi, Jettie Hoonhout, Yvonne Rogers, Kenton O’Hara, and Julie Maitland. Food and interaction design: designing for food in everyday life. In CHI’12 Extended Abstracts on Human Factors in Computing Systems, pages 2767–2770. 2012.
  4. Rob Comber, Jaz Hee-jeong Choi, Jettie Hoonhout, and Kenton O’Hara. Designing for human – food interaction: An introduction to the special issue on “food and interaction design”. International Journal of Human-Computer Studies, 72(2):181–184, 2014.
  5. Andrea Grimes and Richard Harper. Celebratory technology: new directions for food research in hci. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 467–476, 2008.
  6. Rohit Ashok Khot, Deepti Aggarwal, Ryan Pennings, Larissa Hjorth, and Florian ‘Floyd’ Mueller. Edipulse: investigating a playful approach to self-monitoring through 3d printed chocolate treats. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 6593–6607, 2017.
  7. Rohit Ashok Khot, Jeewon Lee, Deepti Aggarwal, Larissa Hjorth, and Florian ‘Floyd’ Mueller. Tastybeats: Designing palatable representations of physical activity. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 2933–2942, 2015.
  8. Rohit Ashok Khot, Ryan Pennings, and Florian ‘Floyd’ Mueller. Edipulse: supporting physical activity with chocolate printed messages. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pages 1391–1396, 2015.
  9. Naoya Koizumi, Hidekazu Tanaka, Yuji Uema, and Masahiko Inami. Chewing jockey: augmented food texture by using sound based on the cross-modal effect. In Proceedings of the 8th international conference on advances in computer entertainment technology, pages 1–4, 2011.
  10. Florian ‘Floyd’ Mueller, Tim Dwyer, Sarah Goodwin, Kim Marriott, Jialin Deng, Han D. Phan, Jionghao Lin, Kun-Ting Chen, Yan Wang, and Rohit Ashok Khot. Data as delight: Eating data. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–14, 2021.
  11. Takuji Narumi, Yuki Ban, Takashi Kajinami, Tomohiro Tanikawa, and Michitaka Hirose. Augmented perception of satiety: controlling food consumption by changing apparent size of food with augmented reality. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 109–118, 2012.
  12. Marianna Obrist, Rob Comber, Sriram Subramanian, Betina Piqueras-Fiszman, Carlos Velasco, and Charles Spence. Temporal, affective, and embodied characteristics of taste experiences: A framework for design. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 2853–2862, 2014.
  13. Marianna Obrist, Yunwen Tu, Lining Yao, and Carlos Velasco. Space food experiences: Designing passenger’s eating experiences for future space travel scenarios. Frontiers in Computer Science, 1, 2019.
  14. Johannes Schöning, Yvonne Rogers, and Antonio Krüger. Digitally enhanced food. IEEE Pervasive Computing, 11(3):4–6, 2012.
  15. Carlos Velasco and Marianna Obrist. Multisensory experiences: A primer. Frontiers in Computer Science, 3, 2021.
  16. Carlos Velasco, Marianna Obrist, Gijs Huisman, Anton Nijholt, Charles Spence, Kosuke Motoki, and Takuji Narumi. Editorial: Perspectives on multisensory human-food interaction. Frontiers in Computer Science, 3, 2021.
  17. Carlos Velasco, Marianna Obrist, Olivia Petit, and Charles Spence. Multisensory technology for flavor augmentation: a mini review. Frontiers in psychology, 9:26, 2018.
  18. Chi Thanh Vi, Asier Marzo, Damien Ablart, Gianluca Memoli, Sriram Subramanian, Bruce Drinkwater, and Marianna Obrist. Tastyfloats: A contactless food delivery system. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, pages 161–170, 2017.
  19. Chi Thanh Vi, Asier Marzo, Gianluca Memoli, Emanuela Maggioni, Damien Ablart, Martin Yeomans, and Marianna Obrist. Levisense: A platform for the multisensory integration in levitating food and insights into its effect on flavour perception. International Journal of Human-Computer Studies, 139:102428, 2020.
  20. Wen Wang, Lining Yao, Teng Zhang, Chin-Yi Cheng, Daniel Levine, and Hiroshi Ishii. Transformative appetite: shape-changing food transforms from 2d to 3d by water interaction through cooking. In Proceedings of the 2017 CHI conference on human factors in computing systems, pages 6123–6132, 2017.
  21. Yan Wang, Zhuying Li, Robert Jarvis, Rohit Ashok Khot, and Florian ‘Floyd’ Mueller. iscream! towards the design of playful gustosonic experiences with ice cream. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–4, 2019.
Copyright Florian 'Floyd' Mueller, Marianna Obrist, Soh Kim, and Masahiko Inami

Motivation

Eating is a basic human need, and technology is transforming the way we cook and eat. Examples include the internet-connected Thermomix cooking appliance, desserts served with virtual reality headsets, projection mapping on dinner plates, and 3D-printed food in Michelin-starred restaurants.

Especially within the field of Human-Computer Interaction (HCI), there is growing interest in understanding the design of technology to support the eating experience. There is a realization that technology can be both instrumentally beneficial (e.g., improving health through better food choices) and experientially beneficial (e.g., enriching eating experiences). Computational technology can make a significant contribution here, as it makes it possible, for example, to present digital data through food (drawing on visualization techniques and fabrication advances such as 3D food printing); to facilitate technology-augmented behavior change that promotes healthier eating choices; to employ big data across suppliers to help choose more sustainable produce (drawing on IoT kitchen appliances); to use machine learning to predictively model eating behavior; to employ mixed reality to facilitate novel eating experiences; and to turn eating into a spectacle through robots that support cooking and serving.

The aim of this Dagstuhl Seminar, called "Eat-IT", is to discuss these opportunities and challenges by bringing together experts and stakeholders with different backgrounds from academia and industry to formulate actionable strategies for how interactive food can benefit from computational technology without it distracting from the eating experience itself. With this seminar, we want to enable a healthy and inclusive debate on the interwoven future of food and computational technology.

Copyright Masahiko Inami, Sohyeong Kim, Florian 'Floyd' Mueller, and Marianna Obrist

Participants
On Site:
  • Ferran Altarriba Bertran (ERAM University School - Salt, ES)
  • Sahej Claire (Stanford University, US)
  • Christopher Dawes (University College London, GB)
  • Jialin Deng (Monash University - Clayton, AU) [dblp]
  • Masahiko Inami (University of Tokyo, JP) [dblp]
  • Kyung seo Jung (Stanford University, US)
  • Sohyeong Kim (Stanford University, US) [dblp]
  • Nadejda Krasteva (Sony - Stuttgart, DE)
  • Kai Kunze (Keio University - Yokohama, JP) [dblp]
  • Neharika Makam (Stanford University, US)
  • Florian 'Floyd' Mueller (Monash University - Clayton, AU) [dblp]
  • Marianna Obrist (University College London, GB) [dblp]
  • Harald Reiterer (Universität Konstanz, DE) [dblp]
  • Matti Schwalk (Sony - Stuttgart, DE)
  • Jürgen Steimle (Universität des Saarlandes, DE) [dblp]
Remote:
  • Eleonora Ceccaldi (University of Genova, IT)
  • Simon Henning (Sony Design - Lund, SE)
  • Rohit Khot (RMIT University - Melbourne, AU) [dblp]
  • Maurizio Mancini (Sapienza University of Rome, IT) [dblp]
  • Patrizia Marti (Università di Siena, IT) [dblp]
  • Nandini Pasumarthy (RMIT University - Melbourne, AU)
  • Yan Wang (Monash University - Clayton, AU) [dblp]

Classification
  • Human-Computer Interaction

Keywords
  • Interaction Design
  • Multisensory interaction
  • Nutrition
  • Health
  • Human-Food Interaction