Personal Food Model

Authors: Ali Rostami, Vaibhav Pandey, Nitish Nag, Vesper Wang, Ramesh Jain

Proceedings of the 28th ACM International Conference on Multimedia (MM '20), October 12–16, 2020, Seattle, WA, USA

Abstract: Food is central to life. It provides us with energy and the foundational building blocks of our bodies, and it is also a major source of joy and new experiences. A significant part of the overall economy is related to food. Food science, distribution, processing, and consumption have been addressed by different communities in silos of computational approaches. In this paper, we adopt a person-centric multimedia and multimodal perspective on food computing and show how multimedia and food computing are synergistic and complementary. Enjoying food is a truly multimedia experience involving sight, taste, smell, and even sound, one that can be captured using a multimedia food logger. The biological response to food can be captured as multimodal data streams from available wearable devices. Central to this approach is the Personal Food Model: the digitized representation of an individual's food-related characteristics. It is designed for use in food recommendation systems that provide eating-related recommendations to improve the user's quality of life. To model each person's food-related characteristics, it is essential to capture their food-related enjoyment in a Preferential Personal Food Model and their biological response to food in a Biological Personal Food Model. Inspired by the power of 3-dimensional color models for visual processing, we introduce a 6-dimensional taste-space for capturing culinary characteristics as well as personal preferences. We use event mining approaches to relate food to other life and biological events, building a predictive model that can also be used effectively in emerging food recommendation systems.
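To make the color-model analogy concrete, the sketch below shows one way a 6-dimensional taste-space could be represented and matched against a Preferential Personal Food Model. The abstract only states that the space is 6-dimensional; the axis names, the Euclidean distance, and the scoring function here are illustrative assumptions, not the paper's actual method.

```python
from dataclasses import dataclass, astuple
import math

# Hypothetical axis labels: the abstract does not enumerate the six
# dimensions, so these names (including "fattiness") are assumptions.
@dataclass(frozen=True)
class TasteVector:
    sweet: float
    sour: float
    salty: float
    bitter: float
    umami: float
    fattiness: float

def taste_distance(a: TasteVector, b: TasteVector) -> float:
    # Euclidean distance in taste-space, by analogy with distance
    # between two points in a 3-dimensional color space.
    return math.dist(astuple(a), astuple(b))

def preference_score(dish: TasteVector, preferred: TasteVector) -> float:
    # Map distance to a (0, 1] score; 1.0 means the dish matches the
    # user's preferred taste profile exactly.
    return 1.0 / (1.0 + taste_distance(dish, preferred))

# Usage: rank candidate dishes against a user's (assumed) preference
# profile, as a food recommender built on such a model might.
user_profile = TasteVector(0.2, 0.4, 0.6, 0.1, 0.8, 0.5)
dishes = {
    "miso ramen": TasteVector(0.1, 0.2, 0.7, 0.1, 0.9, 0.6),
    "lemon tart": TasteVector(0.8, 0.7, 0.1, 0.0, 0.1, 0.4),
}
for name, vec in sorted(dishes.items(),
                        key=lambda kv: -preference_score(kv[1], user_profile)):
    print(f"{name}: {preference_score(vec, user_profile):.3f}")
```

A real Preferential Personal Food Model would presumably be learned from multimedia food logs rather than hand-set, and the Biological Personal Food Model would add the wearable-derived response streams the abstract describes; this sketch covers only the taste-space matching step.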

Submitted to arXiv on 28 Aug. 2020
