Picture-to-Amount (PITA): Predicting Relative Ingredient Amounts from Food Images

This is joint work by Jiatong Li, Fangda Han, Ricardo Guerrero and Vladimir Pavlovic. The paper was accepted at the 25th International Conference on Pattern Recognition (ICPR 2020).

Abstract

Increased awareness of the impact of food consumption on health and lifestyle has given rise to novel data-driven food analysis systems. Although these systems may recognize the ingredients, a detailed analysis of their amounts in the meal, which is paramount for estimating correct nutrition, is usually ignored. In this paper, we study the novel and challenging problem of predicting the relative amount of each ingredient from a food image. We propose PITA, a Picture-to-Amount deep learning architecture, to solve this problem. More specifically, we predict ingredient amounts using a domain-driven Wasserstein loss on image-to-recipe cross-modal embeddings learned to align the two views of food data. Experiments on a dataset of recipes collected from the Internet show that the model generates promising results and improves over the baselines on this challenging task. A demo of our system and our data is available at: foodai.cs.rutgers.edu.
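The abstract mentions a domain-driven Wasserstein loss for comparing predicted and true amount distributions. As a rough intuition only (not the paper's actual loss, which is defined over cross-modal embeddings with a domain-specific ground metric), here is a minimal sketch of the Wasserstein-1 distance between two discrete distributions of relative ingredient amounts, assuming a simple ordered support with unit spacing; the example amounts are hypothetical:

```python
import numpy as np

def wasserstein_1d(p, q):
    """Wasserstein-1 distance between two discrete distributions
    defined over the same ordered support with unit spacing.
    For 1-D distributions, W1 equals the L1 distance between CDFs."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

# Hypothetical example: predicted vs. true relative amounts of 4 ingredients
# (each vector sums to 1, as relative amounts do).
pred = [0.40, 0.30, 0.20, 0.10]
true = [0.50, 0.25, 0.15, 0.10]
print(wasserstein_1d(pred, true))  # → 0.15
```

Unlike a per-ingredient L1 or cross-entropy loss, a Wasserstein loss accounts for how far mass must move between bins, which is why it suits comparing amount distributions.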

Full paper at https://arxiv.org/abs/2010.08727
