---
license: apache-2.0
datasets:
- lmms-lab/EgoLife
base_model:
- lmms-lab/llava-onevision-qwen2-7b-ov
tags:
- multimodal
---

# EgoGPT-7b-Demo

## Model Summary

`EgoGPT-7b-Demo` is an omni-modal model trained on egocentric datasets, achieving state-of-the-art performance on egocentric video understanding. Built on the foundation of `llava-onevision-qwen2-7b-ov`, it has been fine-tuned on the `EgoIT-EgoLife-138k` egocentric dataset, which combines [EgoIT-99K](https://huggingface.co/datasets/lmms-lab/EgoIT-99K) with a depersonalized version of [EgoLife-QA (39k)](https://huggingface.co/datasets/lmms-lab/EgoLife).

EgoGPT excels in two primary scenarios:
- **Advanced Model Integration**: EgoGPT combines LLaVA-OneVision and Whisper, improving its ability to process visual and auditory information (see the short audio sketch after this list).
- **Outstanding Benchmark Performance**: EgoGPT excels in egocentric benchmarks such as EgoSchema, EgoPlan, and EgoThink, surpassing leading commercial and open-source models.
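
On the audio side, the model consumes 128-bin Whisper log-mel features extracted from 16 kHz mono audio. The snippet below is a minimal sketch of that audio path; it simply mirrors the `load_video` helper in the Quick Start script further down, and the file name `clip.wav` is a placeholder.

```python
# Sketch of the audio path: waveform -> 16 kHz mono -> Whisper log-mel features.
# Mirrors the preprocessing in the Quick Start script below; "clip.wav" is a placeholder.
import numpy as np
import soundfile as sf
import whisper
from scipy.signal import resample

speech, sample_rate = sf.read("clip.wav")
if sample_rate != 16000:
    speech = resample(speech, int(len(speech) * 16000 / sample_rate))  # resample to 16 kHz
if speech.ndim > 1:
    speech = np.mean(speech, axis=1)  # downmix to mono
speech = whisper.pad_or_trim(speech.astype(np.float32))  # pad/trim to Whisper's 30 s window
mel = whisper.log_mel_spectrogram(speech, n_mels=128).permute(1, 0)  # shape (3000, 128)
print(mel.shape)
```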

For further details, please refer to the following resources:
- 📰 Paper: *Coming soon*
- 🪐 Project Page: https://github.com/EvolvingLMMs-Lab/EgoLife
- 📦 Datasets: https://huggingface.co/datasets/lmms-lab/EgoIT-99K & https://huggingface.co/datasets/lmms-lab/EgoLife
- 🤗 Model Collections: https://huggingface.co/collections/lmms-lab/egolife-67c04574c2a9b64ab312c342

## Usage

### Installation

1. Clone this repository.

```shell
git clone https://github.com/egolife-ntu/EgoLife
cd EgoLife/EgoGPT
```

2. Install the dependencies.

```shell
conda create -n egogpt python=3.10
conda activate egogpt
pip install --upgrade pip
pip install -e .
```

3. Install the dependencies for training and inference.

```shell
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
```
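
After these steps, a quick import check can confirm the environment is wired up. This is a minimal sketch rather than part of the official instructions; it assumes the editable install above exposes the `egogpt` package (as the Quick Start imports suggest) and that `flash-attn` built successfully.

```python
# Optional post-install sanity check (illustrative, not from the official README).
import torch

import egogpt  # provided by `pip install -e .` above
import flash_attn  # built by `pip install flash-attn --no-build-isolation`
import whisper  # audio feature extractor used in the Quick Start script

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```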

### Quick Start

~~~python
import argparse
import copy
import os
import re
import sys
import warnings

import numpy as np
import requests
import soundfile as sf
import torch
import torch.distributed as dist
import whisper
from decord import VideoReader, cpu
from egogpt.constants import (
    DEFAULT_IMAGE_TOKEN,
    DEFAULT_SPEECH_TOKEN,
    IGNORE_INDEX,
    IMAGE_TOKEN_INDEX,
    SPEECH_TOKEN_INDEX,
)
from egogpt.conversation import SeparatorStyle, conv_templates
from egogpt.mm_utils import get_model_name_from_path, process_images
from egogpt.model.builder import load_pretrained_model
from PIL import Image
from scipy.signal import resample


def setup(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)


def load_video(video_path=None, audio_path=None, max_frames_num=16, fps=1):
    # Load the audio track (if any) and convert it to 128-bin Whisper log-mel features.
    if audio_path is not None:
        speech, sample_rate = sf.read(audio_path)
        if sample_rate != 16000:
            target_length = int(len(speech) * 16000 / sample_rate)
            speech = resample(speech, target_length)
        if speech.ndim > 1:
            speech = np.mean(speech, axis=1)
        speech = whisper.pad_or_trim(speech.astype(np.float32))
        speech = whisper.log_mel_spectrogram(speech, n_mels=128).permute(1, 0)
        speech_lengths = torch.LongTensor([speech.shape[0]])
    else:
        # No audio provided: feed an all-zero placeholder spectrogram.
        speech = torch.zeros(3000, 128)
        speech_lengths = torch.LongTensor([3000])

    # Sample frames at roughly `fps` frames per second, capped at
    # `max_frames_num` uniformly spaced frames.
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    total_frame_num = len(vr)
    avg_fps = round(vr.get_avg_fps() / fps)
    frame_idx = [i for i in range(0, total_frame_num, avg_fps)]
    if max_frames_num > 0 and len(frame_idx) > max_frames_num:
        uniform_sampled_frames = np.linspace(
            0, total_frame_num - 1, max_frames_num, dtype=int
        )
        frame_idx = uniform_sampled_frames.tolist()
    video = vr.get_batch(frame_idx).asnumpy()
    return video, speech, speech_lengths


def split_text(text, keywords):
    pattern = "(" + "|".join(map(re.escape, keywords)) + ")"
    parts = re.split(pattern, text)
    parts = [part for part in parts if part]
    return parts


def main(
    pretrained_path="checkpoints/EgoGPT-7b-Demo",
    video_path=None,
    audio_path=None,
    query="Please describe the video in detail.",
):
    warnings.filterwarnings("ignore")
    setup(0, 1)
    device = "cuda"
    device_map = "cuda"

    tokenizer, model, max_length = load_pretrained_model(
        pretrained_path, device_map=device_map
    )
    model.eval()

    # Build the conversation prompt containing the <image> and <speech> placeholders.
    conv_template = "qwen_1_5"
    question = f"<image>\n<speech>\n\n{query}"
    conv = copy.deepcopy(conv_templates[conv_template])
    conv.append_message(conv.roles[0], question)
    conv.append_message(conv.roles[1], None)
    prompt_question = conv.get_prompt()

    video, speech, speech_lengths = load_video(
        video_path=video_path, audio_path=audio_path
    )
    speech = torch.stack([speech]).to(device).half()
    processor = model.get_vision_tower().image_processor
    processed_video = processor.preprocess(video, return_tensors="pt")["pixel_values"]
    image = [(processed_video, video[0].size, "video")]

    # Replace the <image> and <speech> placeholders with their special token indices.
    parts = split_text(prompt_question, ["<image>", "<speech>"])
    input_ids = []
    for part in parts:
        if part == "<image>":
            input_ids.append(IMAGE_TOKEN_INDEX)
        elif part == "<speech>":
            input_ids.append(SPEECH_TOKEN_INDEX)
        else:
            input_ids.extend(tokenizer(part).input_ids)

    input_ids = torch.tensor(input_ids, dtype=torch.long).unsqueeze(0).to(device)
    image_tensor = [image[0][0].half()]
    image_sizes = [image[0][1]]
    generate_kwargs = {"eos_token_id": tokenizer.eos_token_id}

    cont = model.generate(
        input_ids,
        images=image_tensor,
        image_sizes=image_sizes,
        speech=speech,
        speech_lengths=speech_lengths,
        do_sample=False,  # greedy decoding; temperature has no effect when do_sample=False
        temperature=0.5,
        max_new_tokens=4096,
        modalities=["video"],
        **generate_kwargs,
    )
    text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
    print(text_outputs)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--pretrained_path", type=str, default="lmms-lab/EgoGPT-7b-Demo"
    )
    parser.add_argument("--video_path", type=str, default=None)
    parser.add_argument("--audio_path", type=str, default=None)
    parser.add_argument(
        "--query", type=str, default="Please describe the video in detail."
    )
    args = parser.parse_args()
    main(args.pretrained_path, args.video_path, args.audio_path, args.query)
~~~
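
Saved as a standalone script (e.g. `inference.py`, a placeholder name), the example above can be run directly, passing either the Hub repo id or a local checkpoint directory as `--pretrained_path`. If you prefer a local directory, the checkpoint can be fetched first with `huggingface_hub`; the sketch below assumes that package is installed, and the repo id comes from this model card.

```python
# Optionally pre-download the checkpoint and pass the resulting directory
# to the script above via --pretrained_path.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="lmms-lab/EgoGPT-7b-Demo")
print(local_dir)
```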

## Citation

```bibtex
@inproceedings{yang2025egolife,
  title={EgoLife: Towards Egocentric Life Assistant},
  author={Yang, Jingkang and Liu, Shuai and Guo, Hongming and Dong, Yuhao and Zhang, Xiamengwei and Zhang, Sicheng and Wang, Pengyun and Zhou, Zitang and Xie, Binzhu and Wang, Ziyue and Ouyang, Bei and Lin, Zhengyu and Cominelli, Marco and Cai, Zhongang and Zhang, Yuanhan and Zhang, Peiyuan and Hong, Fangzhou and Widmer, Joerg and Gringoli, Francesco and Yang, Lei and Li, Bo and Liu, Ziwei},
  booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025},
}
```