Vision-Based Localization and Text Chunking of Nutrition Fact Tables on Android Smartphones

Semantic Scholar (2013)

Abstract
Proactive nutrition management is considered by many nutritionists and dieticians a key factor in reducing and controlling cancer, diabetes, and other illnesses related to or caused by mismanaged diets. As more individuals manage their daily activities with smartphones, these devices have the potential to become proactive diet management tools. While there are many vision-based mobile applications that process barcodes, there is a relative dearth of vision-based applications for extracting other useful nutrition information items from product packages, e.g., nutrition facts, caloric contents, and ingredients. In this paper, we present a vision-based algorithm to localize aligned nutrition fact tables (NFTs) present on many grocery product packages and to segment them into text chunks. The algorithm is a front end to a cloud-based nutrition management system we are currently developing. It captures frames in video mode from the smartphone's camera, localizes aligned NFTs via vertical and horizontal projections, and segments the NFTs into single- or multi-line text chunks. The algorithm is implemented on Android 2.3.6 and Android 4.2. Pilot NFT localization and text chunking experiments are presented and discussed.

Keywords—computer vision; image processing; vision-based nutrition information extraction; nutrition management
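To make the projection step concrete, the sketch below shows how horizontal and vertical projection profiles can localize a table-like region in a binarized frame and split it into row-wise text chunks. This is a minimal illustration under assumed inputs, not the paper's implementation: the class name ProjectionChunker and the thresholds MIN_INK and MIN_GAP are hypothetical, and the paper operates on live camera frames on Android rather than a toy boolean array.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of projection-based localization and row chunking,
 * assuming a binarized frame (true = dark/ink pixel). The class name and
 * the thresholds MIN_INK and MIN_GAP are illustrative, not the paper's.
 */
public class ProjectionChunker {

    static final int MIN_INK = 5; // min ink pixels for a row/column to count as content (assumed)
    static final int MIN_GAP = 3; // min empty rows separating two text chunks (assumed)

    /** Horizontal projection: ink-pixel count per row. */
    static int[] horizontalProjection(boolean[][] img) {
        int[] proj = new int[img.length];
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[y].length; x++)
                if (img[y][x]) proj[y]++;
        return proj;
    }

    /** Vertical projection: ink-pixel count per column. */
    static int[] verticalProjection(boolean[][] img) {
        int[] proj = new int[img[0].length];
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[y].length; x++)
                if (img[y][x]) proj[x]++;
        return proj;
    }

    /** Crude extent of the content: first/last index whose projection exceeds MIN_INK. */
    static int[] contentExtent(int[] proj) {
        int lo = 0, hi = proj.length - 1;
        while (lo < hi && proj[lo] < MIN_INK) lo++;
        while (hi > lo && proj[hi] < MIN_INK) hi--;
        return new int[] { lo, hi };
    }

    /** Split rows [top, bottom] into chunks separated by at least MIN_GAP empty rows. */
    static List<int[]> chunkRows(int[] rowProj, int top, int bottom) {
        List<int[]> chunks = new ArrayList<>();
        int start = -1, lastInk = -1;
        for (int y = top; y <= bottom; y++) {
            if (rowProj[y] >= MIN_INK) {
                if (start < 0) start = y;
                lastInk = y;
            } else if (start >= 0 && y - lastInk >= MIN_GAP) {
                chunks.add(new int[] { start, lastInk });
                start = -1;
            }
        }
        if (start >= 0) chunks.add(new int[] { start, lastInk });
        return chunks;
    }

    public static void main(String[] args) {
        // Toy 40x30 frame with two fake text lines at rows 5-9 and 15-19.
        boolean[][] img = new boolean[40][30];
        for (int y = 5; y <= 9; y++) for (int x = 3; x < 27; x++) img[y][x] = true;
        for (int y = 15; y <= 19; y++) for (int x = 3; x < 27; x++) img[y][x] = true;

        int[] rowProj = horizontalProjection(img);
        int[] rows = contentExtent(rowProj);
        int[] cols = contentExtent(verticalProjection(img));
        System.out.println("table bounds: rows " + rows[0] + ".." + rows[1]
                + ", cols " + cols[0] + ".." + cols[1]);
        for (int[] c : chunkRows(rowProj, rows[0], rows[1]))
            System.out.println("text chunk: rows " + c[0] + ".." + c[1]);
    }
}
```

Run on the toy frame, this prints the table bounds (rows 5..19, cols 3..26) followed by two text chunks (rows 5..9 and 15..19). A gap of MIN_GAP or more empty rows closes a chunk, which is one simple way to produce the single- or multi-line chunks the abstract describes.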