⚡ Quick Summary
A recent study evaluated a new automated food recognition app among young adults, demonstrating that the app correctly identified 86% of dishes and significantly outperformed a voice input alternative (68%). This advance in artificial intelligence technology shows promise for enhancing dietary intake reporting.
📋 Key Details
- 👥 Participants: 42 young adults aged 20-25 years
- 🍽️ Dishes tested: 17 typical lunch or dinner dishes
- ⚙️ Technology: automatic image-based reporting (AIR) vs. voice input reporting (VIR)
- 📊 Performance metrics: accuracy, efficiency, and user perception
🔑 Key Takeaways
- 📸 The AIR app identified 86% of dishes correctly, compared with 68% for the VIR app.
- ⏱️ Time efficiency was significantly better in the AIR group (P<.001).
- 🧠 User perception of both apps was high, indicating good usability (see the SUS scoring sketch after this list).
- 🌍 The study was conducted in authentic dining conditions, enhancing real-world applicability.
- 💡 Potential for AI to improve dietary tracking and health monitoring.
- 📱 Future research is needed for broader applications and user experience assessments.
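The usability finding above comes from System Usability Scale (SUS) scores reported in the abstract below. For reference, this is how a standard 10-item SUS questionnaire is scored; a minimal sketch, with example responses invented for illustration rather than taken from the study's data.

```python
# Standard scoring for the 10-item System Usability Scale (SUS), the
# instrument the study used to assess user perception. The example
# responses below are invented for illustration, not study data.
def sus_score(responses):
    """responses: ten answers on a 1-5 scale, in SUS item order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 sum to a 0-100 score

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```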
📚 Background
With the rise of artificial intelligence, there is a growing interest in its application for dietary assessment. Traditional methods of tracking food intake can be cumbersome and prone to errors. This study aimed to explore the effectiveness of an automated food recognition app in real-life dining scenarios, providing a more efficient and user-friendly alternative for dietary reporting.
🏛️ Study
Conducted at a university in Taiwan, this randomized controlled trial involved 42 young adults assigned to either the automatic image-based reporting (AIR) group or the voice input reporting (VIR) group. All participants reported their meals on the same type of smartphone: the AIR group captured and uploaded images of their dishes, supplemented with voice commands where appropriate, while the VIR group supplemented an uploaded image with spoken food names and attributes.
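The paper describes the AIR app as automatically detecting and recognizing multiple dishes within a single real-time food image. The authors' actual model is not described here, so the sketch below is a rough illustration only: it runs a generic pretrained object detector over a meal photo, and the model choice, file name, and confidence threshold are all assumptions rather than the study's system.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Generic pretrained detector as a stand-in for the paper's food
# recognition model (a hypothetical substitution, not the authors' system).
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("meal_photo.jpg")  # hypothetical input photo of one meal
with torch.no_grad():
    detections = model([preprocess(img)])[0]

# Each confident detection stands in for one dish; in the AIR workflow the
# user would confirm or correct these before the report is uploaded.
categories = weights.meta["categories"]
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score >= 0.5:  # assumed confidence threshold
        print(categories[label],
              [round(v, 1) for v in box.tolist()],
              f"{score.item():.2f}")
```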
📈 Results
The results were striking: the AIR group achieved 86% (189/220) accuracy in dish identification, while the VIR group managed only 68% (136/200). The AIR group also completed their reporting significantly faster; both differences were statistically significant at P<.001. Both apps were rated highly for usability, suggesting that users found them easy to learn and use. A quick check of the headline accuracy comparison against the reported counts appears below.
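As a sanity check, the abstract's raw counts (AIR 189/220, VIR 136/200) can be fed into a standard two-proportion chi-square test. The paper's exact statistical procedure is not restated in the abstract, so this is a plausibility check rather than a reproduction of the authors' analysis; it does land comfortably below P=.001.

```python
# Two-proportion chi-square test on the dish-identification counts
# reported in the abstract (AIR: 189/220 correct; VIR: 136/200 correct).
from scipy.stats import chi2_contingency

table = [
    [189, 220 - 189],  # AIR: correct, incorrect
    [136, 200 - 136],  # VIR: correct, incorrect
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")  # p is far below .001
```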
🌟 Impact and Implications
The findings from this study could have significant implications for dietary tracking and health monitoring. By integrating automated image recognition technology into mobile applications, we can enhance the accuracy and efficiency of food intake reporting. This could lead to better dietary management and health outcomes, particularly for individuals seeking to monitor their nutrition more effectively.
🔮 Conclusion
This study highlights the potential of AI-driven technologies in transforming dietary reporting. The AIR app’s superior performance in accuracy and efficiency suggests that such innovations could play a crucial role in future health applications. Continued research and development in this area are essential to fully realize the benefits of automated food recognition in everyday life.
💬 Your comments
What are your thoughts on the use of AI for dietary tracking? Do you think automated food recognition could change how we monitor our eating habits? 💬 Share your insights in the comments below or connect with us on social media.
Automatic Image Recognition Meal Reporting Among Young Adults: Randomized Controlled Trial.
Abstract
BACKGROUND: Advances in artificial intelligence technology have raised new possibilities for the effective evaluation of daily dietary intake, but more empirical study is needed on the use of such technologies under realistic meal scenarios. This study developed an automated food recognition technology, which was integrated into the team's previous app design to improve usability for meal reporting. The newly developed app allowed for the automatic detection and recognition of multiple dishes within a single real-time food image as input. App performance was tested with young adults in authentic dining conditions.
OBJECTIVE: A 2-group comparative study was conducted to assess app performance using metrics including accuracy, efficiency, and user perception. The experimental group, named the automatic image-based reporting (AIR) group, was compared against a control group using the previous version, named the voice input reporting (VIR) group. Each application is primarily designed to facilitate a distinct method of food intake reporting. AIR users capture and upload images of their selected dishes, supplemented with voice commands where appropriate. VIR users supplement the uploaded image with verbal inputs for food names and attributes.
METHODS: The 2 mobile apps were subjected to a head-to-head parallel randomized evaluation. A cohort of 42 young adults aged 20-25 years (9 male and 33 female participants) was recruited from a university in Taiwan and randomly assigned to 2 groups, that is, AIR (n=22) and VIR (n=20). Both groups were assessed using the same menu of 17 dishes. Each meal was designed to represent a typical lunch or dinner setting, with 1 staple, 1 main course, and 3 side dishes. All participants used the app on the same type of smartphone, with the interfaces of both using uniform user interactions, icons, and layouts. Analysis of the gathered data focused on assessing reporting accuracy, time efficiency, and user perception.
RESULTS: In the AIR group, 86% (189/220) of dishes were correctly identified, whereas in the VIR group, 68% (136/200) of dishes were accurately reported. The AIR group exhibited a significantly higher degree of identification accuracy compared to the VIR group (P<.001). The AIR group also required significantly less time to complete food reporting (P<.001). System Usability Scale scores showed both apps were perceived as having high usability and learnability (P=.20).
CONCLUSIONS: The AIR group outperformed the VIR group concerning accuracy and time efficiency for overall dish reporting within the meal testing scenario. While further technological enhancement may be required, artificial intelligence vision technology integration into existing mobile apps holds promise. Our results provide evidence-based contributions to the integration of automatic image recognition technology into existing apps in terms of user interaction efficacy and overall ease of use. Further empirical work is required, including full-scale randomized controlled trials and assessments of user perception under various conditions.
Authors: Sahoo PK, Chiu SY, Lin YS, Chen CH, Irianti D, Chen HY, Sarkar M, Liu YC
Journal: JMIR Mhealth Uhealth
Citation: Sahoo PK, et al. Automatic Image Recognition Meal Reporting Among Young Adults: Randomized Controlled Trial. JMIR Mhealth Uhealth. 2025;13:e60070. doi: 10.2196/60070