AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding
CoRR (2024)
Abstract
AI personal assistants deployed via robots or wearables require embodied
understanding to collaborate with humans effectively. However, current
Vision-Language Models (VLMs) primarily focus on third-person view videos,
neglecting the richness of egocentric perceptual experience. To address this
gap, we propose three key contributions. First, we introduce the Egocentric
Video Understanding Dataset (EVUD) for training VLMs on video captioning and
question answering tasks specific to egocentric videos. Second, we present
AlanaVLM, a 7B parameter VLM trained using parameter-efficient methods on EVUD.
Finally, we evaluate AlanaVLM's capabilities on OpenEQA, a challenging
benchmark for embodied video question answering. Our model achieves
state-of-the-art performance, outperforming open-source models including strong
Socratic models using GPT-4 as a planner by 3.6%. Additionally, we outperform
Claude 3 and Gemini Pro Vision 1.0 and showcase competitive results compared to
Gemini Pro 1.5 and GPT-4V, even surpassing the latter in spatial reasoning.
This research paves the way for building efficient VLMs that can be deployed in
robots or wearables, leveraging embodied video understanding to collaborate
seamlessly with humans in everyday tasks, contributing to the next generation
of Embodied AI.
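
The abstract states that AlanaVLM is a 7B parameter VLM fine-tuned with parameter-efficient methods on EVUD, without detailing the exact recipe. As a rough illustration only, the sketch below attaches LoRA adapters to a generic 7B causal language model with the Hugging Face peft library; the base model name, adapter hyperparameters, and target modules are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch of parameter-efficient fine-tuning via LoRA.
# Assumptions: base model name, rank, alpha, and target modules are
# illustrative placeholders, not the recipe used for AlanaVLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "meta-llama/Llama-2-7b-hf"  # placeholder 7B backbone

model = AutoModelForCausalLM.from_pretrained(
    base_model_name, torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# LoRA trains only small low-rank adapter matrices while the 7B base
# weights stay frozen, which keeps memory and compute costs low.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable
```

The adapted model can then be fine-tuned with a standard training loop or a trainer of choice on captioning and question-answering examples such as those in EVUD.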