Abstract
We introduce EgoLife, a project to develop an egocentric life assistant that accompanies users and enhances personal efficiency through AI-powered wearable glasses. To lay the foundation for this assistant, we conducted a comprehensive data collection study in which six participants lived together for one week, continuously recording their daily activities - including discussions, shopping, cooking, socializing, and entertainment - using AI glasses for multimodal egocentric video capture, along with synchronized third-person-view video references. This effort resulted in the EgoLife Dataset, a 300-hour egocentric, interpersonal, multiview, and multimodal daily-life dataset with intensive annotation. Leveraging this dataset, we introduce EgoLifeQA, a suite of long-context, life-oriented question-answering tasks designed to provide meaningful assistance in daily life by addressing practical questions such as recalling past relevant events, monitoring health habits, and offering personalized recommendations. To address the key technical challenges of (1) developing robust visual-audio models for egocentric data, (2) enabling identity recognition, and (3) facilitating long-context question answering over extensive temporal information, we introduce EgoButler, an integrated system comprising EgoGPT and EgoRAG. EgoGPT is an omni-modal model trained on egocentric datasets that achieves state-of-the-art performance on egocentric video understanding. EgoRAG is a retrieval-based component that supports answering ultra-long-context questions. Our experimental studies verify their working mechanisms and reveal critical factors and bottlenecks, guiding future improvements. By releasing our datasets, models, and benchmarks, we aim to stimulate further research in egocentric AI assistants.
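To make the EgoButler design concrete, here is a minimal, illustrative sketch of an EgoRAG-style retrieval loop: time-stamped clip captions (as an EgoGPT-like captioner would produce) form a memory bank, the most relevant clips are retrieved for a question, and the retrieved context is handed to an answering model. This is not the authors' implementation; the bag-of-words retriever and the captioner/answerer stubs are placeholders for illustration only.

```python
# Hypothetical sketch of an EgoRAG-style pipeline (not the released code):
# store captioned clips with timestamps, retrieve by similarity to a question,
# and return the retrieved evidence that a real system would feed to an LLM.

from dataclasses import dataclass
from collections import Counter
import math


@dataclass
class ClipMemory:
    """One entry of the long-term memory bank: a time-stamped clip caption."""
    start_s: float   # clip start time in seconds
    end_s: float     # clip end time in seconds
    caption: str     # dense caption from an egocentric captioner (stub here)


def _bow(text: str) -> Counter:
    """Tiny bag-of-words featurizer standing in for a learned text encoder."""
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(memory: list[ClipMemory], question: str, top_k: int = 3) -> list[ClipMemory]:
    """Return the top-k clips whose captions are most similar to the question."""
    q = _bow(question)
    return sorted(memory, key=lambda m: _cosine(q, _bow(m.caption)), reverse=True)[:top_k]


def answer(question: str, evidence: list[ClipMemory]) -> str:
    """Stub answerer: formats the retrieved evidence that an omni-modal LLM
    would consume as context in a full system."""
    lines = [f"[{m.start_s:.0f}-{m.end_s:.0f}s] {m.caption}" for m in evidence]
    return f"Q: {question}\nRetrieved context:\n" + "\n".join(lines)


if __name__ == "__main__":
    # Toy memory bank standing in for a week of captioned egocentric clips.
    memory = [
        ClipMemory(0, 30, "I chop vegetables in the kitchen with a friend"),
        ClipMemory(3600, 3630, "we discuss the shopping list for the party"),
        ClipMemory(7200, 7230, "I pay for groceries including milk and eggs at the store"),
    ]
    question = "When did we buy milk?"
    print(answer(question, retrieve(memory, question)))
```

In practice the captions, retriever, and answerer would be replaced by the released EgoGPT model, a learned embedding index, and a long-context LLM, respectively; the sketch only shows how the pieces fit together.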
Community
Website: https://egolife-ai.github.io/
Blog: https://egolife-ai.github.io/blog/
Code: https://github.com/EvolvingLMMs-Lab/EgoLife
HuggingFace Dataset: https://huggingface.co./collections/lmms-lab/egolife-67c04574c2a9b64ab312c342
Self-deployed demo: https://egolife.lmms-lab.com/
HuggingFace demo: https://huggingface.co./spaces/Jingkang/EgoGPT-7B
Related papers recommended by the Semantic Scholar API (via Librarian Bot):
- EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering (2025)
- X-LeBench: A Benchmark for Extremely Long Egocentric Video Understanding (2025)
- VideoRAG: Retrieval-Augmented Generation with Extreme Long-Context Videos (2025)
- EgoNormia: Benchmarking Physical Social Norm Understanding (2025)
- EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild (2025)
- HD-EPIC: A Highly-Detailed Egocentric Video Dataset (2025)
- USER-VLM 360: Personalized Vision Language Models with User-aware Tuning for Social Human-Robot Interactions (2025)