Prediction of glycan content from real-world food diaries

SPECIFIC AIMS:

1) Test off-the-shelf computer vision algorithms on food images for single core foods
2) Generate benchmark data set of real-world mixed meal photos paired with food records
3) Engage with SCIO to pilot test whether NIR can distinguish a pair of visually similar foods
4) Develop glycan library for core foods
5) Build library of images for single core foods being analyzed for glycan content
6) Determine optimal set of mixed meals to analyze for glycan content

BACKGROUND:

The endpoint of the food system is the consumption of that food by humans to sustain human health. To determine the effect of diet on human health, nutrition scientists use controlled feeding studies, which are prohibitively expensive. An alternative approach is to use large observational cohorts with self-reported dietary intake, but current methods of dietary intake assessment are extremely burdensome for human participants and contain substantial biases and errors. New AI-assisted methods that enable participants to capture diet in real time are needed to reduce these biases. Further, the glycan content of food—those portions of the diet that are accessible to microbes—is not yet known, hindering the ability to predict the effects of foods on the gut microbiome. The purpose of the first year of this proposal, therefore, is to develop benchmark data and food image/text processing towards the prediction of the glycan content in images of mixed meals from real-world food diaries.

SIGNIFICANCE AND IMPACT:

Nutrition researchers use antiquated methods to assess diet because validated continuous real-time dietary assessment does not yet exist. Furthermore, no application on the market today provides information on the glycans in foods that are primary carbon sources for our gut microbes. Solving this problem will radically alter nutrition research and accelerate our progress towards determining what people should eat to nourish the right gut microbes.

APPROACH WITH SUCCESS METRICS:

First, we will curate a dataset of ‘core foods’ corresponding to simple (i.e. non-recipe) foods evaluated by the Lebrilla lab for glycan content (n > 200). For each core food, 100 images will be downloaded using Google image search. The resulting images will be QC’d to remove duplicates and irrelevant images. We will manually select images from the Recipe1M dataset, which contains images from online recipes, to obtain a subset of images of mixed foods and images with multiple foods per image. The mixed food images will include foods with the core foods as ingredients (e.g. core food = ‘oatmeal’, mixed food = ‘oatmeal cookies’) in addition to other randomly selected mixed food images. Images will be annotated with ground truth bounding boxes via Mechanical Turk.
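The duplicate-removal step in the QC above can be sketched in a few lines. This is a minimal illustration that drops byte-identical copies via content hashing; near-duplicate detection (e.g. perceptual hashing of resized crops) would be layered on top, and the `dedupe_images` helper name and its (filename, bytes) input format are assumptions, not part of the proposal.

```python
import hashlib

def dedupe_images(images):
    """images: iterable of (filename, raw_bytes) pairs.
    Returns the filenames to keep, retaining only the first
    copy of each byte-identical image (exact-duplicate removal)."""
    seen, kept = set(), []
    for name, data in images:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(name)
    return kept
```

In practice the same pattern extends to near-duplicates by replacing the SHA-256 digest with a perceptual hash and a similarity threshold.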

The core and mixed food image dataset will be used to develop a model for identifying core and mixed foods. We will use transfer learning to leverage existing pre-trained models. First, we will try Faster R-CNN or YOLO models to classify whether food in an image is a core or mixed food, and each food will be cropped to its identified bounding box. Cropping to the area of the food will enable the input images to contain multiple foods or plates (i.e. meals) instead of only accommodating close-ups of a single food. Object detection and classification will be evaluated using intersection over union and accuracy. Next, we will try the Inception-v3, ResNet-50, and VGG16 models to classify the specific labels of the core foods (e.g. apple, oatmeal, brown rice). The im2recipe model, which uses a pre-trained version of ResNet-50, will be used to identify the individual ingredients in the mixed foods. Performance of the core foods classifier and im2recipe models will be evaluated using accuracy. Since the model will be developed using images obtained from the internet, the final ensemble will be evaluated using the real-world images collected from human subjects.
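The intersection-over-union metric used above to evaluate detections has a direct closed-form computation. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) pixel corners (the coordinate convention is an assumption; detection frameworks differ):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes,
    each given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted box is typically counted as a correct detection when its IoU with a ground-truth box exceeds a threshold such as 0.5.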

To complement visual recognition, we will test near-infrared (NIR) technology from SCIO, a pocket-sized micro-spectrometer. We will test visually similar foods (e.g. mashed cauliflower and mashed potatoes) to determine whether they can be distinguished by their NIR spectra.
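Discriminating two visually similar foods from their NIR spectra reduces to a classification problem over reflectance vectors. A minimal sketch under stated assumptions — spectra are aligned lists of reflectance values and a simple nearest-centroid rule suffices for the pilot; the `nearest_centroid_predict` helper and the toy spectra are illustrative, not SCIO's actual API or data:

```python
def nearest_centroid_predict(train, query):
    """train: dict mapping food label -> list of spectra, where each
    spectrum is a list of reflectance values at fixed wavelengths.
    Classifies `query` by Euclidean distance to each class's mean spectrum."""
    def mean(spectra):
        n = len(spectra)
        return [sum(vals) / n for vals in zip(*spectra)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    centroids = {label: mean(s) for label, s in train.items()}
    return min(centroids, key=lambda label: dist(centroids[label], query))
```

If the pilot shows the classes are not linearly separable, the same evaluation harness accommodates stronger models (e.g. PLS-DA, common in NIR chemometrics) without changing the data layout.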

Because we cannot determine the glycan profile of an infinite number of mixed meals, we will use an optimization algorithm, OPEX, to identify which mixed meals to analyze, creating the data set needed to build a mixed-meal glycan predictor.
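The subset-selection problem above can be illustrated with a greedy max-min diversity heuristic: repeatedly pick the candidate meal farthest from everything already selected, so the analyzed set spans the space of meal compositions. This is an illustrative stand-in, not the OPEX algorithm; the `greedy_diverse_subset` helper and the vector representation of meals are assumptions.

```python
def greedy_diverse_subset(meals, k, distance):
    """Select k meals via greedy max-min diversity: at each step, add the
    remaining meal whose minimum distance to the chosen set is largest.
    meals: list of meal representations (e.g. ingredient-fraction vectors).
    distance: callable(meal_a, meal_b) -> float."""
    remaining = list(meals)
    chosen = [remaining.pop(0)]  # seed with the first meal
    while len(chosen) < k and remaining:
        best = max(remaining,
                   key=lambda m: min(distance(m, c) for c in chosen))
        remaining.remove(best)
        chosen.append(best)
    return chosen
```

An active-learning scheme like OPEX would instead pick the meals expected to most improve the predictor, but the interface — score candidates, select a batch, analyze, repeat — is the same.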

PROJECT TEAM:

The projects are jointly conducted with partners through the AIFS Network:
https://aifs.ucdavis.edu/

USDA: D. Lemay (PI), C. Stephensen (Collaborator)
UC Davis: E. Bonnel (Collaborator), M. Earles (Collaborator), B. German (PI), C. Lebrilla (PI), X. Liu (Collaborator), D. Mills (PI), J. Siegel (Collaborator), J. Smilowitz (PI), I. Tagkopoulos (PI)
UC Berkeley: E. Ligon (Collaborator)
Pennington: C. Martin (Collaborator)