ClownRat committed on
Commit 0c5b101 · verified · 1 Parent(s): d4ea3ca

Upload processor

README.md ADDED
@@ -0,0 +1,199 @@
1
+ ---
2
+ library_name: transformers
3
+ tags: []
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
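+
+ Until the official snippet is added, the sketch below is a minimal, unofficial example. It assumes the repository is loaded with `trust_remote_code=True` (the custom classes are registered via `auto_map` in `preprocessor_config.json`), uses a placeholder repo id, and assumes the processor routes conversations through `_process_conversation` in `processing_videollama3.py`; the nested `{"video": {"video_path": ...}}` content format follows that file.
+
+ ```python
+ from transformers import AutoProcessor
+
+ # Placeholder repo id -- replace with the actual Hub repository.
+ processor = AutoProcessor.from_pretrained("path/to/this/repo", trust_remote_code=True)
+
+ conversation = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "video", "video": {"video_path": "demo.mp4", "fps": 1, "max_frames": 128}},
+             {"type": "text", "text": "Describe this video in detail."},
+         ],
+     }
+ ]
+
+ # The processor loads the video, renders the chat template, and returns tokenized
+ # text plus flattened image patches ("pixel_values", "grid_sizes", "merge_sizes").
+ inputs = processor(conversation=conversation, return_tensors="pt")
+ ```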
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
added_tokens.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "</tool_call>": 151658,
3
+ "<image>": 151665,
4
+ "<tool_call>": 151657,
5
+ "<|box_end|>": 151649,
6
+ "<|box_start|>": 151648,
7
+ "<|endoftext|>": 151643,
8
+ "<|file_sep|>": 151664,
9
+ "<|fim_middle|>": 151660,
10
+ "<|fim_pad|>": 151662,
11
+ "<|fim_prefix|>": 151659,
12
+ "<|fim_suffix|>": 151661,
13
+ "<|im_end|>": 151645,
14
+ "<|im_start|>": 151644,
15
+ "<|image_pad|>": 151655,
16
+ "<|object_ref_end|>": 151647,
17
+ "<|object_ref_start|>": 151646,
18
+ "<|quad_end|>": 151651,
19
+ "<|quad_start|>": 151650,
20
+ "<|repo_name|>": 151663,
21
+ "<|stream_end|>": 151667,
22
+ "<|stream_start|>": 151666,
23
+ "<|video_pad|>": 151656,
24
+ "<|vision_end|>": 151653,
25
+ "<|vision_pad|>": 151654,
26
+ "<|vision_start|>": 151652
27
+ }
chat_template.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "chat_template": "\n{%- set identifier = 'im' %}\n{% for message in messages %}\n {% if add_system_prompt and loop.first and message['role'] != 'system' %}\n {{- '<|im_start|>system\nYou are VideoLLaMA3 created by Alibaba DAMO Academy, a helpful assistant to help people understand images and videos.<|im_end|>\n' -}}\n {% endif %}\n {% if message['role'] == 'stream' %}\n {% set identifier = 'stream' %}\n {% else %}\n {% set identifier = 'im' %}\n {% endif %}\n {{- '<|' + identifier + '_start|>' + message['role'] + '\n' -}}\n {% if message['content'] is string %}\n {{- message['content'] + '<|' + identifier + '_end|>\n' -}}\n {% else %}\n {% for content in message['content'] %}\n {% if content is string %}\n {{- content -}}\n {% elif content['type'] == 'text' or 'text' in content %}\n {{- content['text'] -}}\n {% elif content['type'] == 'image' or 'image' in content %}\n {% if 'timestamp' in content %}\n {{- 'Time ' + content['timestamp'] | round(1) | string + 's: ' -}}\n {% endif %}\n {{- image_token + '\n' -}}\n {% elif content['type'] == 'video' or 'video' in content %}\n {% for i in range(content['num_frames']) %}\n {% if 'timestamps' in content %}\n {{- 'Time ' + content['timestamps'][i] | round(1) | string + 's:' -}}\n {% endif %}\n {% if i < content['num_frames'] - 1 %}\n {{- image_token + ',' -}}\n {% else %}\n {{- image_token + '\n' -}}\n {% endif %}\n {% endfor %}\n {% endif %}\n {% endfor %}\n {% if identifier == 'stream' %}\n {{- '<|' + identifier + '_end|>' -}}\n {% else %}\n {{- '<|' + identifier + '_end|>\n' -}}\n {% endif %}\n {% endif %}\n{% endfor %}\n{% if add_generation_prompt %}\n {{- '<|im_start|>assistant\n' -}}\n{% endif %}\n"
3
+ }
image_processing_videollama3.py ADDED
@@ -0,0 +1,473 @@
1
+ # Adopted from https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py.
2
+ # Below is the original copyright:
3
+ # Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
6
+ # and OPT implementations in this library. It has been modified from its
7
+ # original forms to accommodate minor architectural differences compared
8
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
9
+ #
10
+ # Licensed under the Apache License, Version 2.0 (the "License");
11
+ # you may not use this file except in compliance with the License.
12
+ # You may obtain a copy of the License at
13
+ #
14
+ # http://www.apache.org/licenses/LICENSE-2.0
15
+ #
16
+ # Unless required by applicable law or agreed to in writing, software
17
+ # distributed under the License is distributed on an "AS IS" BASIS,
18
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
19
+ # See the License for the specific language governing permissions and
20
+ # limitations under the License.
21
+ """Image processor class for VideoLLaMA3."""
22
+
23
+ import math
24
+ from typing import Dict, List, Optional, Union
25
+
26
+ import numpy as np
27
+
28
+ import torch
29
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
30
+ from transformers.image_utils import ImageInput
31
+ from transformers.image_transforms import (
32
+ convert_to_rgb,
33
+ resize,
34
+ to_channel_dimension_format,
35
+ )
36
+ from transformers.image_utils import (
37
+ OPENAI_CLIP_MEAN,
38
+ OPENAI_CLIP_STD,
39
+ ChannelDimension,
40
+ ImageInput,
41
+ PILImageResampling,
42
+ VideoInput,
43
+ get_image_size,
44
+ infer_channel_dimension_format,
45
+ is_scaled_image,
46
+ is_valid_image,
47
+ make_list_of_images,
48
+ to_numpy_array,
49
+ )
50
+ from transformers.utils import TensorType, is_vision_available, logging
51
+
52
+
53
+ logger = logging.get_logger(__name__)
54
+
55
+
56
+ if is_vision_available():
57
+ from PIL import Image
58
+
59
+
60
+ def is_valid_video(video) -> bool:
61
+ if isinstance(video, (list, tuple)):
62
+ return all(is_valid_image(frame) for frame in video)
63
+ elif isinstance(video, np.ndarray):
64
+ return video.ndim == 4
65
+ elif isinstance(video, torch.Tensor):
66
+ return video.ndim == 4
67
+ return False
68
+
69
+
70
+ def make_batched_images(images) -> List[List[ImageInput]]:
71
+ """
72
+ Accepts images in list or nested list format, and makes a list of images for preprocessing.
73
+
74
+ Args:
75
+ images (`Union[List[List[ImageInput]], List[ImageInput], ImageInput]`):
76
+ The input image.
77
+
78
+ Returns:
79
+ list: A list of images.
80
+ """
81
+ if isinstance(images, (list, tuple)):
82
+ # list of images/videos
83
+ if not all(is_valid_video(image) or is_valid_image(image) for image in images):
84
+ raise ValueError(f"Could not make batched images from {images}")
85
+ return images
86
+ elif is_valid_video(images) or is_valid_image(images):
87
+ # single image/video
88
+ return [images]
89
+
90
+ raise ValueError(f"Could not make batched images from {images}")
91
+
92
+
93
+ def simple_batched_resize(
94
+ images, factor: int = 28, min_tokens: int = 4 * 4, max_tokens: int = 16384, input_data_format: str = None
95
+ ):
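+ # Pick a (height, width) for every image/frame that is a multiple of `factor`
+ # (patch_size * merge_size), splitting the global `max_tokens` budget evenly
+ # across all images and video frames in the batch.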
96
+ min_pixels = min_tokens * factor * factor
97
+ max_pixels = max_tokens * factor * factor
98
+
99
+ num_images = 0
100
+ for image in images:
101
+ if is_valid_video(image):
102
+ num_images += len(image)
103
+ else:
104
+ num_images += 1
105
+
106
+ image_sizes = []
107
+ for image in images:
108
+ if is_valid_video(image):
109
+ image = image[0]
110
+ if isinstance(image, Image.Image):
111
+ width, height = image.size  # PIL's Image.size is (width, height)
112
+ else:
113
+ height, width = get_image_size(image, channel_dim=input_data_format)
114
+ image_sizes.append([height, width])
115
+
116
+ tmp_image_sizes = []
117
+ for height, width in image_sizes:
118
+ h_bar = round(height / factor) * factor
119
+ w_bar = round(width / factor) * factor
120
+ if h_bar * w_bar > (max_pixels // num_images):
121
+ beta = math.sqrt((height * width) / (max_pixels // num_images))
122
+ h_bar = math.floor(height / beta / factor) * factor
123
+ w_bar = math.floor(width / beta / factor) * factor
124
+ # per image min_pixels
125
+ if h_bar * w_bar < min_pixels:
126
+ beta = math.sqrt(min_pixels / (height * width))
127
+ h_bar = math.ceil(height * beta / factor) * factor
128
+ w_bar = math.ceil(width * beta / factor) * factor
129
+ tmp_image_sizes.append((h_bar, w_bar))
130
+ image_sizes = tmp_image_sizes
131
+ return image_sizes
132
+
133
+
134
+ def batched_resize(
135
+ images, factors: List[int], min_tokens: int = 4 * 4, max_tokens: int = 16384, input_data_format: str = None
136
+ ):
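+ # Variant of `simple_batched_resize` for batches mixing merge sizes: each image gets
+ # its own `factor`, while all images and frames share one global `max_tokens` budget.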
137
+ image_sizes = []
138
+ for image in images:
139
+ if is_valid_video(image):
140
+ num_frame = len(image)
141
+ image = image[0]
142
+ else:
143
+ num_frame = 1
144
+ if isinstance(image, Image.Image):
145
+ width, height = image.size  # PIL's Image.size is (width, height)
146
+ else:
147
+ height, width = get_image_size(image, channel_dim=input_data_format)
148
+ image_sizes.append([num_frame, height, width])
149
+
150
+ # global max_pixels
151
+ smart_scale_factors = 1.0
152
+ total_tokens = 0
153
+ for (num_frame, height, width), factor in zip(image_sizes, factors):
154
+ total_tokens += num_frame * math.ceil(height / factor) * math.ceil(width / factor)
155
+
156
+ # TODO: add min_pixels
157
+ if total_tokens > max_tokens:
158
+ beta = math.sqrt(total_tokens / max_tokens)
159
+ tmp_image_sizes = []
160
+ for (_, height, width), factor in zip(image_sizes, factors):
161
+ h_bar = math.floor(height / beta / factor) * factor
162
+ w_bar = math.floor(width / beta / factor) * factor
163
+ tmp_image_sizes.append((h_bar, w_bar))
164
+ image_sizes = tmp_image_sizes
165
+ else:
166
+ tmp_image_sizes = []
167
+ for (_, height, width), factor in zip(image_sizes, factors):
168
+ height = round(height / factor) * factor
169
+ width = round(width / factor) * factor
170
+ tmp_image_sizes.append((height, width))
171
+ image_sizes = tmp_image_sizes
172
+
173
+ return image_sizes
174
+
175
+
176
+ class Videollama3ImageProcessor(BaseImageProcessor):
177
+ r"""
178
+ Constructs a VideoLLaMA3 image processor that dynamically resizes images based on the original image size.
179
+
180
+ Args:
181
+ do_resize (`bool`, *optional*, defaults to `True`):
182
+ Whether to resize the image's (height, width) dimensions.
183
+ resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
184
+ Resampling filter to use when resizing the image.
185
+ do_rescale (`bool`, *optional*, defaults to `True`):
186
+ Whether to rescale the image by the specified scale `rescale_factor`.
187
+ rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
188
+ Scale factor to use if rescaling the image.
189
+ do_normalize (`bool`, *optional*, defaults to `True`):
190
+ Whether to normalize the image.
191
+ image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
192
+ Mean to use if normalizing the image. This is a float or list of floats for each channel in the image.
193
+ image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
194
+ Standard deviation to use if normalizing the image. This is a float or list of floats for each channel in the image.
195
+ do_convert_rgb (`bool`, *optional*, defaults to `True`):
196
+ Whether to convert the image to RGB.
197
+ min_tokens (`int`, *optional*, defaults to `4 * 4`):
198
+ The minimum number of vision tokens per image after resizing.
199
+ max_tokens (`int`, *optional*, defaults to `16384`):
200
+ The maximum total number of vision tokens across all images and frames in one call.
201
+ patch_size (`int`, *optional*, defaults to 14):
202
+ The spatial patch size of the vision encoder.
203
+ """
204
+
205
+ model_input_names = ["pixel_values", "grid_sizes", "merge_sizes"]
206
+
207
+ def __init__(
208
+ self,
209
+ do_resize: bool = True,
210
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
211
+ do_rescale: bool = True,
212
+ rescale_factor: Union[int, float] = 1 / 255,
213
+ do_normalize: bool = True,
214
+ image_mean: Optional[Union[float, List[float]]] = None,
215
+ image_std: Optional[Union[float, List[float]]] = None,
216
+ do_convert_rgb: bool = True,
217
+ min_tokens: int = 4 * 4,
218
+ max_tokens: int = 16384,
219
+ patch_size: int = 14,
220
+ **kwargs,
221
+ ) -> None:
222
+ super().__init__(**kwargs)
223
+ self.do_resize = do_resize
224
+ self.resample = resample
225
+ self.do_rescale = do_rescale
226
+ self.rescale_factor = rescale_factor
227
+ self.do_normalize = do_normalize
228
+ self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN
229
+ self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD
230
+ self.min_tokens = min_tokens
231
+ self.max_tokens = max_tokens
232
+ self.patch_size = patch_size
233
+ self.do_convert_rgb = do_convert_rgb
234
+
235
+ def _preprocess(
236
+ self,
237
+ images: Union[ImageInput, VideoInput],
238
+ target_size: List[int],
239
+ merge_size: int = 1,
240
+ do_resize: bool = None,
241
+ resample: PILImageResampling = None,
242
+ do_rescale: bool = None,
243
+ rescale_factor: float = None,
244
+ do_normalize: bool = None,
245
+ image_mean: Optional[Union[float, List[float]]] = None,
246
+ image_std: Optional[Union[float, List[float]]] = None,
247
+ do_convert_rgb: bool = None,
248
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
249
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
250
+ ):
251
+ """
252
+ Preprocess an image or batch of images. Copy of the `preprocess` method from `CLIPImageProcessor`.
253
+
254
+ Args:
255
+ images (`ImageInput`):
256
+ Image or batch of images to preprocess. Expects pixel values ranging from 0 to 255. If pixel values range from 0 to 1, set `do_rescale=False`.
257
+ target_size (`List[int]`):
258
+ The target size to resize the image to. Should be a list of two integers: [target_height, target_width].
259
+ merge_size (`int`, *optional*, defaults to `1`):
260
+ The merge size after the vision encoder.
261
+ do_resize (`bool`, *optional*, defaults to `self.do_resize`):
262
+ Whether to resize the image.
263
+ resample (`PILImageResampling`, *optional*, defaults to `self.resample`):
264
+ Resampling filter to use if resizing the image. This can be one of the `PILImageResampling` enums.
265
+ do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
266
+ Whether to rescale the image.
267
+ rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
268
+ Scale factor to use if rescaling the image.
269
+ do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
270
+ Whether to normalize the image.
271
+ image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
272
+ Mean to use if normalizing the image. Can be a float or a list of floats corresponding to the number of channels in the image.
273
+ image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
274
+ Standard deviation to use if normalizing the image. Can be a float or a list of floats corresponding to the number of channels in the image.
275
+ do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
276
+ Whether to convert the image to RGB.
277
+ data_format (`ChannelDimension`, *optional*, defaults to `ChannelDimension.FIRST`):
278
+ The channel dimension format for the output image. Can be one of:
279
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
280
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
281
+ - Unset: Use the channel dimension format of the input image.
282
+ input_data_format (`ChannelDimension` or `str`, *optional*):
283
+ The channel dimension format for the input image. Can be one of:
284
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
285
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
286
+ - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
287
+ """
288
+ images = make_list_of_images(images)
289
+
290
+ if do_convert_rgb:
291
+ images = [convert_to_rgb(image) for image in images]
292
+
293
+ # All transformations expect numpy arrays.
294
+ images = [to_numpy_array(image) for image in images]
295
+
296
+ if is_scaled_image(images[0]) and do_rescale:
297
+ logger.warning_once(
298
+ "It looks like you are trying to rescale already rescaled images. If the input"
299
+ " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
300
+ )
301
+ if input_data_format is None:
302
+ # We assume that all images have the same channel dimension format.
303
+ input_data_format = infer_channel_dimension_format(images[0])
304
+
305
+ height, width = get_image_size(images[0], channel_dim=input_data_format)
306
+ resized_height, resized_width = height, width
307
+ processed_images = []
308
+ for image in images:
309
+ if do_resize:
310
+ resized_height, resized_width = target_size
311
+ image = resize(
312
+ image, size=(resized_height, resized_width), resample=resample, input_data_format=input_data_format
313
+ )
314
+
315
+ if do_rescale:
316
+ image = self.rescale(image, scale=rescale_factor, input_data_format=input_data_format)
317
+
318
+ if do_normalize:
319
+ image = self.normalize(
320
+ image=image, mean=image_mean, std=image_std, input_data_format=input_data_format
321
+ )
322
+
323
+ image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format)
324
+ processed_images.append(image)
325
+
326
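+ # Stack the processed frames and cut each one into patch_size x patch_size patches,
+ # keeping merge_size x merge_size neighbouring patches contiguous after flattening.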
+ patches = np.array(processed_images)
327
+ if data_format == ChannelDimension.LAST:
328
+ patches = patches.transpose(0, 3, 1, 2)
329
+ t = patches.shape[0]
330
+ channel = patches.shape[1]
331
+ grid_h, grid_w = resized_height // self.patch_size, resized_width // self.patch_size
332
+ patches = patches.reshape(
333
+ t,
334
+ channel,
335
+ grid_h // merge_size,
336
+ merge_size,
337
+ self.patch_size,
338
+ grid_w // merge_size,
339
+ merge_size,
340
+ self.patch_size,
341
+ )
342
+ patches = patches.transpose(0, 2, 5, 3, 6, 1, 4, 7)
343
+ flatten_patches = patches.reshape(
344
+ t * grid_h * grid_w, channel * self.patch_size * self.patch_size
345
+ )
346
+
347
+ return flatten_patches, (t, grid_h, grid_w)
348
+
349
+ def preprocess(
350
+ self,
351
+ images: ImageInput,
352
+ do_resize: bool = None,
353
+ resample: PILImageResampling = None,
354
+ do_rescale: bool = None,
355
+ rescale_factor: float = None,
356
+ do_normalize: bool = None,
357
+ image_mean: Optional[Union[float, List[float]]] = None,
358
+ image_std: Optional[Union[float, List[float]]] = None,
359
+ do_convert_rgb: bool = None,
360
+ merge_size: Optional[Union[int, List[int]]] = None,
361
+ return_tensors: Optional[Union[str, TensorType]] = None,
362
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
363
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
364
+ ):
365
+ """
366
+ Args:
367
+ images (`ImageInput`):
368
+ Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
369
+ passing in images with pixel values between 0 and 1, set `do_rescale=False`.
370
+ do_resize (`bool`, *optional*, defaults to `self.do_resize`):
371
+ Whether to resize the image.
372
+ resample (`int`, *optional*, defaults to `self.resample`):
373
+ Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
374
+ has an effect if `do_resize` is set to `True`.
375
+ do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
376
+ Whether to rescale the image.
377
+ rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
378
+ Rescale factor to rescale the image by if `do_rescale` is set to `True`.
379
+ do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
380
+ Whether to normalize the image.
381
+ image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
382
+ Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
383
+ image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
384
+ Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
385
+ `True`.
386
+ do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
387
+ Whether to convert the image to RGB.
388
+ return_tensors (`str` or `TensorType`, *optional*):
389
+ The type of tensors to return. Can be one of:
390
+ - Unset: Return a list of `np.ndarray`.
391
+ - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
392
+ - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
393
+ - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
394
+ - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
395
+ data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
396
+ The channel dimension format for the output image. Can be one of:
397
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
398
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
399
+ - Unset: Use the channel dimension format of the input image.
400
+ input_data_format (`ChannelDimension` or `str`, *optional*):
401
+ The channel dimension format for the input image. If unset, the channel dimension format is inferred
402
+ from the input image. Can be one of:
403
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
404
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
405
+ - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
406
+
407
+ """
408
+ do_resize = do_resize if do_resize is not None else self.do_resize
409
+ resample = resample if resample is not None else self.resample
410
+ do_rescale = do_rescale if do_rescale is not None else self.do_rescale
411
+ rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
412
+ do_normalize = do_normalize if do_normalize is not None else self.do_normalize
413
+ image_mean = image_mean if image_mean is not None else self.image_mean
414
+ image_std = image_std if image_std is not None else self.image_std
415
+ merge_size = merge_size if merge_size is not None else self.merge_size
416
+ do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
417
+
418
+ images = make_batched_images(images)
419
+
420
+ if isinstance(merge_size, (list, tuple)):
421
+ assert len(merge_size) == len(images), "Merge size must be the same length as images."
422
+ merge_sizes = merge_size
423
+ else:
424
+ merge_sizes = [merge_size for _ in images]
425
+
426
+ if all(merge_size == merge_sizes[0] for merge_size in merge_sizes):
427
+ target_sizes = simple_batched_resize(
428
+ images,
429
+ factor=self.patch_size * merge_sizes[0],
430
+ min_tokens=self.min_tokens,
431
+ max_tokens=self.max_tokens,
432
+ input_data_format=input_data_format,
433
+ )
434
+ else:
435
+ target_sizes = batched_resize(
436
+ images,
437
+ factors=[self.patch_size * merge_size for merge_size in merge_sizes],
438
+ min_tokens=self.min_tokens,
439
+ max_tokens=self.max_tokens,
440
+ input_data_format=input_data_format,
441
+ )
442
+
443
+ pixel_values, grid_sizes = [], []
444
+ for image, merge_size, target_size in zip(images, merge_sizes, target_sizes):
445
+ patches, grid_size = self._preprocess(
446
+ image,
447
+ target_size=target_size,
448
+ merge_size=merge_size,
449
+ do_resize=do_resize,
450
+ resample=resample,
451
+ do_rescale=do_rescale,
452
+ rescale_factor=rescale_factor,
453
+ do_normalize=do_normalize,
454
+ image_mean=image_mean,
455
+ image_std=image_std,
456
+ data_format=data_format,
457
+ do_convert_rgb=do_convert_rgb,
458
+ input_data_format=input_data_format,
459
+ )
460
+ pixel_values.append(patches)
461
+ grid_sizes.append(grid_size)
462
+
463
+ pixel_values = np.concatenate(pixel_values, axis=0)
464
+ grid_sizes = np.array(grid_sizes)
465
+ merge_sizes = np.array(merge_sizes)
466
+
467
+ data = {
468
+ "pixel_values": pixel_values,
469
+ "grid_sizes": grid_sizes,
470
+ "merge_sizes": merge_sizes,
471
+ }
472
+
473
+ return BatchFeature(data=data, tensor_type=return_tensors)
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoImageProcessor": "image_processing_videollama3.Videollama3ImageProcessor",
4
+ "AutoProcessor": "processing_videollama3.Videollama3Qwen2Processor"
5
+ },
6
+ "do_convert_rgb": true,
7
+ "do_normalize": true,
8
+ "do_rescale": true,
9
+ "do_resize": true,
10
+ "image_mean": [
11
+ 0.5,
12
+ 0.5,
13
+ 0.5
14
+ ],
15
+ "image_processor_type": "Videollama3ImageProcessor",
16
+ "image_std": [
17
+ 0.5,
18
+ 0.5,
19
+ 0.5
20
+ ],
21
+ "max_tokens": 16384,
22
+ "min_tokens": 16,
23
+ "patch_size": 14,
24
+ "processor_class": "Videollama3Qwen2Processor",
25
+ "resample": 3,
26
+ "rescale_factor": 0.00392156862745098
27
+ }
processing_videollama3.py ADDED
@@ -0,0 +1,891 @@
1
+ """Processor class for VideoLLaMA3."""
2
+
3
+ import copy
4
+ import importlib.util
5
+ import os
6
+ import os.path as osp
7
+ import warnings
8
+ from collections import defaultdict
9
+ from typing import Any, List, Union, Dict, Optional, Tuple, TypedDict
10
+
11
+ import cv2
12
+ import ffmpeg
13
+ import imageio
14
+ import json
15
+ import numpy as np
16
+ import torch
17
+ import transformers
18
+ from decord import VideoReader, cpu
19
+ from PIL import Image
20
+ from transformers.feature_extraction_utils import BatchFeature
21
+ from transformers.image_utils import ImageInput
22
+ from transformers.processing_utils import ProcessingKwargs, ProcessorMixin, Unpack
23
+ from transformers.tokenization_utils_base import PreTokenizedInput, TextInput
24
+
25
+ try:
26
+ from . import image_processing_videollama3
27
+ from .image_processing_videollama3 import (
28
+ is_valid_image, is_valid_video,
29
+ )
30
+ except ModuleNotFoundError:
31
+ spec = importlib.util.spec_from_file_location(
32
+ "image_processing_videollama3",
33
+ osp.join(osp.dirname(__file__), "image_processing_videollama3.py"),
34
+ )
35
+ image_processing_videollama3 = importlib.util.module_from_spec(spec)
36
+ spec.loader.exec_module(image_processing_videollama3)
37
+ is_valid_image = getattr(image_processing_videollama3, "is_valid_image")
38
+ is_valid_video = getattr(image_processing_videollama3, "is_valid_video")
39
+
40
+ # constants
41
+ DEFAULT_IMAGE_TOKEN = "<image>"
42
+ IGNORE_INDEX = -100
43
+
44
+ # Type aliases
45
+ Conversation = List[Dict[str, Any]]
46
+ SingleImage = Union[Image.Image, np.ndarray, torch.Tensor]
47
+ SingleVideo = Union[List[SingleImage], np.ndarray, torch.Tensor]
48
+ BatchedImage = List[Union[SingleImage, SingleVideo]]
49
+ BatchedNamedImage = List[Tuple[str, Union[SingleImage, SingleVideo]]]
50
+
51
+
52
+ def _custom_import(class_name: str):
53
+ try:
54
+ attribute_class = getattr(transformers, class_name)
55
+ except AttributeError:
56
+ attribute_class = getattr(image_processing_videollama3, class_name)
57
+ return attribute_class
58
+
59
+
60
+ def is_named_image(image) -> bool:
61
+ return isinstance(image, (list, tuple)) and \
62
+ len(image) == 2 and \
63
+ isinstance(image[0], str) and \
64
+ image[0] in ["image", "video"] and \
65
+ (is_valid_image(image[1]) or is_valid_video(image[1]))
66
+
67
+
68
+ def make_batched_images(images) -> List[List[ImageInput]]:
69
+ if isinstance(images, (list, tuple)) and all(is_named_image(image) for image in images):
70
+ # list of named images
71
+ return [image[0] for image in images], [image[1] for image in images]
72
+ elif isinstance(images, (list, tuple)) and all(is_valid_image(image) or is_valid_video(image) for image in images):
73
+ # list of images/videos
74
+ batch = []
75
+ for image in images:
76
+ if is_valid_video(image):
77
+ batch.append(("video", image))
78
+ elif is_valid_image(image):
79
+ batch.append(("image", image))
80
+ else:
81
+ raise ValueError(f"Could not make batched images from {images}")
82
+ return [x[0] for x in batch], [x[1] for x in batch]
83
+ elif is_named_image(images):
84
+ # named images
85
+ return [images[0]], [images[1]]
86
+ elif is_valid_video(images):
87
+ # single video
88
+ return ["video"], [images]
89
+ elif is_valid_image(images):
90
+ # single image
91
+ return ["image"], [images]
92
+
93
+ raise ValueError(f"Could not make batched images from {images}")
94
+
95
+
96
+ def frame_sample(duration, mode='uniform', num_frames=None, vid_fps=None, fps=None):
97
+ if mode == 'uniform':
98
+ assert num_frames is not None, "Number of frames must be provided for uniform sampling."
99
+ if duration <= num_frames:
100
+ return np.arange(duration).astype(int)
101
+ # NOTE: v1 version
102
+ # Calculate the size of each segment from which a frame will be extracted
103
+ # if duration <= num_frames:
104
+ # return np.arange(duration).astype(int)
105
+ # seg_size = float(duration - 1) / num_frames
106
+
107
+ # frame_ids = []
108
+ # for i in range(num_frames):
109
+ # # Calculate the start and end indices of each segment
110
+ # start = seg_size * i
111
+ # end = seg_size * (i + 1)
112
+ # # Append the middle index of the segment to the list
113
+ # frame_ids.append((start + end) / 2)
114
+
115
+ # return np.round(np.array(frame_ids) + 1e-6).astype(int)
116
+ # NOTE: v0 version
117
+ return np.linspace(0, duration-1, num_frames, dtype=int)
118
+ elif mode == 'fps':
119
+ assert vid_fps is not None, "Source video FPS must be provided for FPS sampling."
120
+ assert fps is not None, "FPS must be provided for FPS sampling."
121
+ segment_len = min(vid_fps // fps, duration)
122
+ return np.arange(segment_len // 2, duration, segment_len, dtype=int)
123
+ else:
124
+ raise ValueError(f'Unsupported frame sampling mode: {mode}')
125
+
126
+
127
+ def load_video_from_ids(video_path, s=None, e=None, fps=None, max_frames=128, temporal_factor=1):
128
+ if s is not None and e is not None:
129
+ s = s if s >= 0. else 0.
130
+ e = e if e >= 0. else 0.
131
+ if s > e:
132
+ s, e = e, s
133
+ elif s == e:
134
+ e = s + 1
135
+
136
+ # 1. Loading Video
137
+ if os.path.isdir(video_path):
138
+ frame_files = sorted(os.listdir(video_path))
139
+
140
+ vid_fps = 3
141
+ num_frames_of_video = len(frame_files)
142
+ elif video_path.endswith('.gif'):
143
+ gif_reader = imageio.get_reader(video_path)
144
+
145
+ vid_fps = 25
146
+ num_frames_of_video = len(gif_reader)
147
+ else:
148
+ vreader = VideoReader(video_path, ctx=cpu(0), num_threads=2)
149
+ # vreader = VideoReader(video_path, ctx=cpu(0), num_threads=1)
150
+
151
+ vid_fps = vreader.get_avg_fps()
152
+ num_frames_of_video = len(vreader)
153
+
154
+ # 2. Determine frame range & Calculate frame indices
155
+ f_start = 0 if s is None else max(int(s * vid_fps) - 1, 0)
156
+ f_end = num_frames_of_video - 1 if e is None else min(int(e * vid_fps) - 1, num_frames_of_video - 1)
157
+ frame_indices = list(range(f_start, f_end + 1))
158
+
159
+ duration = len(frame_indices)
160
+ # 3. Sampling frame indices
161
+ if fps is not None and duration / vid_fps < max_frames:
162
+ sampled_frame_indices = [frame_indices[i] for i in frame_sample(duration, mode='fps', vid_fps=vid_fps, fps=fps)]
163
+ else:
164
+ sampled_frame_indices = [frame_indices[i] for i in frame_sample(duration, mode='uniform', num_frames=max_frames)]
165
+
166
+ # 4. Acquire frame data
167
+ if os.path.isdir(video_path):
168
+ frames = np.array([cv2.cvtColor(cv2.imread(os.path.join(video_path, frame_files[frame_idx])), cv2.COLOR_BGR2RGB) for frame_idx in sampled_frame_indices])
169
+ elif video_path.endswith('.gif'):
170
+ frames = np.array([cv2.cvtColor(frame, cv2.COLOR_RGBA2RGB) for idx, frame in enumerate(gif_reader) if idx in sampled_frame_indices])
171
+ else:
172
+ frames = vreader.get_batch(sampled_frame_indices).asnumpy()
173
+
174
+ frames = frames.transpose(0, 3, 1, 2)
175
+ timestamps = [x / vid_fps for x in sampled_frame_indices]
176
+
177
+ if temporal_factor > 1:
178
+ pad_length = temporal_factor - len(frames) % temporal_factor
179
+ frames = np.concatenate([frames, frames[-1:].repeat(pad_length, axis=0)])
180
+ for _ in range(pad_length):
+ timestamps.append(timestamps[-1] + 1 / fps)
181
+
182
+ frames = [frame for frame in frames]
183
+
184
+ return frames, timestamps
185
+
186
+
187
+ class ChatTemplateKwargs(TypedDict, total=False):
188
+
189
+ chat_template: Optional[str]
190
+ add_system_prompt: Optional[bool]
191
+ add_generation_prompt: Optional[bool]
192
+
193
+
194
+ class Videollama3Qwen2ProcessorKwargs(ProcessingKwargs, ChatTemplateKwargs, total=False):
195
+
196
+ chat_template_kwargs: ChatTemplateKwargs = {
197
+ **ChatTemplateKwargs.__annotations__,
198
+ }
199
+
200
+ _defaults = {
201
+ "text_kwargs": {
202
+ "padding": False,
203
+ },
204
+ "images_kwargs": {
205
+ "merge_size": None,
206
+ },
207
+ "chat_template_kwargs": {
208
+ "chat_template": None,
209
+ "add_system_prompt": False,
210
+ "add_generation_prompt": False,
211
+ },
212
+ }
213
+
214
+
215
+ class Videollama3Qwen2Processor(ProcessorMixin):
216
+
217
+ attributes = ["image_processor", "tokenizer"]
218
+ image_processor_class = "Videollama3ImageProcessor"
219
+ tokenizer_class = ("Qwen2Tokenizer", "Qwen2TokenizerFast")
220
+ valid_kwargs = ["chat_template", "image_merge_size", "video_merge_size", "fps", "max_frames"]
221
+
222
+ def __init__(
223
+ self,
224
+ image_processor=None,
225
+ tokenizer=None,
226
+ chat_template: str = None,
227
+ image_merge_size: int = 1,
228
+ video_merge_size: int = 2,
229
+ fps: Optional[int] = 1,
230
+ max_frames: Optional[int] = 128,
231
+ ):
232
+ self.image_processor = image_processor
233
+ self.tokenizer = tokenizer
234
+ if chat_template is None:
235
+ chat_template = self.tokenizer.chat_template
236
+ self.chat_template = chat_template
237
+
238
+ self.image_merge_size = image_merge_size
239
+ self.video_merge_size = video_merge_size
240
+ self.fps = fps
241
+ self.max_frames = max_frames
242
+
243
+ self.generation_prompt = self._infer_generation_prompt()
244
+ self.generation_prompt_ids = self.tokenizer.encode(self.generation_prompt, return_tensors="pt")
245
+ self.generation_prompt_length = len(self.generation_prompt_ids[0])
246
+ self.image_token_id = self.tokenizer.convert_tokens_to_ids(DEFAULT_IMAGE_TOKEN)
247
+ self.eos_token_id = self.tokenizer.eos_token_id
248
+
249
+ @classmethod
250
+ def _get_arguments_from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
251
+ args = []
252
+ for attribute_name in cls.attributes:
253
+ class_name = getattr(cls, f"{attribute_name}_class")
254
+ if isinstance(class_name, tuple):
255
+ classes = tuple(_custom_import(n) if n is not None else None for n in class_name)
256
+ use_fast = kwargs.get("use_fast", True)
257
+ if use_fast and classes[1] is not None:
258
+ attribute_class = classes[1]
259
+ else:
260
+ attribute_class = classes[0]
261
+ else:
262
+ attribute_class = _custom_import(class_name)
263
+
264
+ args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
265
+ return args
266
+
267
+ def get_generation_prompt(self):
268
+ return self.generation_prompt
269
+
270
+ def get_generation_prompt_ids(self):
271
+ return self.generation_prompt_ids
272
+
273
+ def _infer_generation_prompt(self):
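+ # Recover the generation prompt (e.g. "<|im_start|>assistant\n") by rendering an empty
+ # user message with and without `add_generation_prompt` and taking the difference.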
274
+ pseudo_message = [{"role": "user", "content": ""}]
275
+ instruction = self.apply_chat_template(pseudo_message, tokenize=False, add_generation_prompt=True)
276
+ conversation = self.apply_chat_template(pseudo_message, tokenize=False, add_generation_prompt=False)
277
+ return instruction.replace(conversation, "")
278
+
279
+ def _get_downsampled_grid_sizes(self, image_inputs: Dict[str, Any]):
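+ # Convert each (t, h, w) patch grid into the per-frame grid seen by the LLM after
+ # merge_size x merge_size pooling, expanding videos into one entry per frame.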
280
+ grid_sizes = []
281
+ for grid_size, merge_size in zip(image_inputs.get("grid_sizes", []), image_inputs.get("merge_sizes", [])):
282
+ if not torch.all(grid_size[1:] % merge_size == 0):
283
+ warnings.warn(f"Grid size {grid_size} is not divisible by merge size. Some undesired errors may occur.")
284
+ if grid_size[0] == 1:
285
+ grid_sizes.append(grid_size[1:] / merge_size)
286
+ elif grid_size[0] > 1:
287
+ grid_sizes.extend([grid_size[1:] / merge_size] * grid_size[0])
288
+ return grid_sizes
289
+
290
+ def _get_visual_seq_len(self, grid_size: torch.Tensor):
291
+ num_tokens = int(grid_size.prod().item())
292
+ return num_tokens
293
+
294
+ def load_images(self, image_path: Union[str, List[str], Image.Image, List[Image.Image]]):
295
+ if isinstance(image_path, str) and os.path.isfile(image_path):
296
+ # images = [cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)]
297
+ images = [Image.open(image_path).convert('RGB')]
298
+ elif isinstance(image_path, str) and os.path.isdir(image_path):
299
+ # images = [cv2.cvtColor(cv2.imread(os.path.join(image_path, f)), cv2.COLOR_BGR2RGB) for f in sorted(os.listdir(image_path))]
300
+ images = [Image.open(os.path.join(image_path, f)).convert('RGB') for f in sorted(os.listdir(image_path))]
301
+ elif isinstance(image_path, list) and isinstance(image_path[0], str):
302
+ # images = [cv2.cvtColor(cv2.imread(f), cv2.COLOR_BGR2RGB) for f in image_path]
303
+ images = [Image.open(f).convert('RGB') for f in image_path]
304
+ elif isinstance(image_path, list) and isinstance(image_path[0], Image.Image):
305
+ images = [np.array(x) for x in image_path]
306
+ elif isinstance(image_path, Image.Image):
307
+ images = [np.array(image_path)]
308
+ else:
309
+ raise ValueError(f"Unsupported image path type: {type(image_path)}")
310
+ return images
311
+
312
+ def load_video(
313
+ self,
314
+ video_path: str,
315
+ start_time: Optional[float] = None,
316
+ end_time: Optional[float] = None,
317
+ fps: Optional[float] = None,
318
+ max_frames: Optional[float] = None,
319
+ size: Optional[int] = None,
320
+ size_divisible: int = 1,
321
+ precise_time: bool = False,
322
+ verbose: bool = False,
323
+ temporal_factor: int = 1
324
+ ):
325
+ """
326
+ Load and process a video file and return the frames and the timestamps of each frame.
327
+
328
+ Args:
329
+ video_path (str): Path to the video file.
330
+ start_time (float, optional): Start time in seconds. Defaults to None.
331
+ end_time (float, optional): End time in seconds. Defaults to None.
332
+ fps (float, optional): Frames per second. Defaults to None.
333
+ max_frames (int, optional): Maximum number of frames to sample. Defaults to None.
334
+ size (int, optional): Size of the shortest side. Defaults to None.
335
+ size_divisible (int, optional): Size divisible by this number. Defaults to 1.
336
+ precise_time (bool, optional): Whether to use precise time. Defaults to False.
337
+ verbose (bool, optional): Print ffmpeg output. Defaults to False.
338
+
339
+ Returns:
340
+ frames (List[PIL.Image]): List of frames.
341
+ timestamps (List[float]): List of timestamps.
342
+ """
343
+ fps = self.fps if fps is None else fps
344
+ max_frames = self.max_frames if max_frames is None else max_frames
345
+
346
+ if start_time is not None and end_time is not None and end_time - start_time < 1:
347
+ return load_video_from_ids(video_path, start_time, end_time, fps=fps, max_frames=max_frames)
348
+ if os.path.isdir(video_path):
349
+ return load_video_from_ids(video_path, start_time, end_time, fps=fps, max_frames=max_frames)
350
+ if video_path.endswith('.gif'):
351
+ return load_video_from_ids(video_path, start_time, end_time, fps=fps, max_frames=max_frames)
352
+ probe = ffmpeg.probe(video_path)
353
+ duration = float(probe['format']['duration'])
354
+ video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
355
+ w, h = int(video_stream['width']), int(video_stream['height'])
356
+
357
+ kwargs, input_kwargs, output_kwargs = {}, {}, {}
358
+ do_trim = start_time is not None or end_time is not None
359
+ if start_time is not None:
360
+ new_start_time = max(float(video_stream['start_time']), start_time)
361
+ duration -= new_start_time - start_time
362
+ start_time = new_start_time
363
+ else:
364
+ start_time = float(video_stream['start_time'])
365
+ if end_time is not None:
366
+ duration = min(duration, end_time - start_time)
367
+ else:
368
+ duration = duration
369
+ if do_trim:
370
+ kwargs = {'ss': start_time, 't': duration}
371
+ if precise_time:
372
+ output_kwargs.update(kwargs)
373
+ else:
374
+ input_kwargs.update(kwargs)
375
+
376
+ if size is not None:
377
+ scale_factor = size / min(w, h)
378
+ new_w, new_h = round(w * scale_factor), round(h * scale_factor)
379
+ else:
380
+ new_w, new_h = w, h
381
+ new_w = new_w // size_divisible * size_divisible
382
+ new_h = new_h // size_divisible * size_divisible
383
+
384
+ # NOTE: It may result in unexpected number of frames in ffmpeg
385
+ # if calculate the fps directly according to max_frames
386
+ # if max_frames is not None and (fps is None or duration * fps > 2 * max_frames):
387
+ # fps = round(max_frames / duration * 2)
388
+
389
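+ # Decode with ffmpeg: optionally trim to the requested window, resample to `fps`,
+ # rescale, and read raw RGB24 frames from stdout.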
+ stream = ffmpeg.input(video_path, **input_kwargs)
390
+ if fps is not None:
391
+ stream = ffmpeg.filter(stream, "fps", fps=fps, round="down")
392
+ if new_w != w or new_h != h:
393
+ stream = ffmpeg.filter(stream, 'scale', new_w, new_h)
394
+ stream = ffmpeg.output(stream, "pipe:", format="rawvideo", pix_fmt="rgb24", **output_kwargs)
395
+ out, _ = ffmpeg.run(stream, capture_stdout=True, quiet=not verbose)
396
+
397
+ frames = np.frombuffer(out, np.uint8).reshape([-1, new_h, new_w, 3]).transpose([0, 3, 1, 2])
398
+
399
+ if fps is not None:
400
+ timestamps = np.arange(start_time, start_time + duration + 1 / fps, 1 / fps)[:len(frames)]
401
+ else:
402
+ timestamps = np.linspace(start_time, start_time + duration, len(frames))
403
+
404
+ if max_frames is not None and len(frames) > max_frames:
405
+ indices = np.linspace(0, len(frames) - 1, max_frames, dtype=int)
406
+ frames = frames[indices]
407
+ timestamps = timestamps[indices]
408
+
409
+ if temporal_factor > 1:
410
+ pad_length = temporal_factor - len(frames) % temporal_factor
411
+ frames = np.concatenate([frames, frames[-1:].repeat(pad_length, axis=0)])
412
+ timestamps = np.concatenate([timestamps, timestamps[-1:].repeat(pad_length) + np.arange(1, pad_length + 1) / fps])
413
+
414
+ frames = [frame for frame in frames]
415
+ timestamps = [timestamp for timestamp in timestamps]
416
+
417
+ return frames, timestamps
418
+
419
+ def _load_multimodal_data(self, conversation: Conversation):
420
+ multimodal_info = defaultdict(list)
421
+ new_conversation = []
422
+ for message in conversation:
423
+ new_message = {"role": message["role"]}
424
+ if not isinstance(message["content"], (list, tuple)):
425
+ new_message["content"] = message["content"]
426
+ new_conversation.append(new_message)
427
+ continue
428
+
429
+ new_contents = []
430
+ for content in message["content"]:
431
+ if not isinstance(content, dict):
432
+ new_contents.append(content)
433
+ continue
434
+ assert "type" in content, "Content must have 'type' field."
435
+ if content["type"] in ["image", "video"] and content["type"] in content and isinstance(content[content["type"]], dict):
436
+ # TODO: support other types which are not compatible with json
437
+ load_args = content[content["type"]]
438
+ data_id = json.dumps({k: v for k, v in load_args.items() if not k in ["start_time", "end_time"]})
439
+ new_content = copy.deepcopy(content)
440
+ multimodal_info[data_id].append(new_content)
441
+ new_contents.append(new_content)
442
+ else:
443
+ new_contents.append(content)
444
+
445
+ new_message["content"] = new_contents
446
+ new_conversation.append(new_message)
447
+
448
+ for data_id, contents in multimodal_info.items():
449
+ data_type = contents[0]["type"]
450
+ if data_type == "image":
451
+ image = self.load_images(contents[0][data_type]["image_path"])[0]
452
+ for content in contents:
453
+ content["image"] = [image.copy()]
454
+
455
+ elif data_type == "video":
456
+ # TODO: start_time is None?
457
+ start_times = [content["video"].get("start_time", 0.) for content in contents]
458
+ end_times = [content["video"].get("end_time", float("inf")) for content in contents]
459
+
460
+ load_args = contents[0][data_type]
461
+ start_time, end_time = min(start_times), max(end_times)
462
+ if start_time > 0:
463
+ load_args["start_time"] = start_time
464
+ if end_time < float("inf"):
465
+ load_args["end_time"] = end_time
466
+ images, timestamps = self.load_video(**load_args)
467
+
468
+ for content, start_time, end_time in zip(contents, start_times, end_times):
469
+ cur_images, cur_timestamps = [], []
470
+ for image, timestamp in zip(images, timestamps):
471
+ if start_time <= timestamp <= end_time:
472
+ cur_images.append(image.copy())
473
+ cur_timestamps.append(timestamp)
474
+
475
+ content[data_type] = cur_images
476
+ content["num_frames"] = len(cur_images)
477
+ content["timestamps"] = cur_timestamps
478
+
479
+ return new_conversation
480
+
481
+ def _gather_multimodal_data(self, conversation: Conversation):
482
+ images = []
483
+ for message in conversation:
484
+ if not isinstance(message["content"], (list, tuple)):
485
+ continue
486
+ for content in message["content"]:
487
+ if not isinstance(content, dict):
488
+ continue
489
+ if content["type"] == "video":
490
+ video = content["video"]
491
+ assert is_valid_video(video), f"Invalid video data: {video}."
492
+ images.append(("video", video))
493
+ if content["type"] == "image":
494
+ image = content["image"]
495
+ images.append(("image", image))
496
+ images = images if len(images) > 0 else None
497
+ return images
498
+
499
+ def _process_conversation_with_label(
500
+ self,
501
+ conversation: Conversation,
502
+ image_inputs: Dict[str, Any],
503
+ **kwargs,
504
+ ):
505
+ assert kwargs.pop("return_tensors", "pt") == "pt", "Only PyTorch tensors are supported when return_labels=True."
506
+ assert not "add_generation_prompt" in kwargs, "'add_generation_prompt' argument is not supported when return_labels=True."
507
+
508
+ output_kwargs = self._merge_kwargs(
509
+ Videollama3Qwen2ProcessorKwargs,
510
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
511
+ **kwargs,
512
+ )
513
+ output_kwargs["chat_template_kwargs"].pop("add_generation_prompt")
514
+
515
+ grid_sizes = self._get_downsampled_grid_sizes(image_inputs)
516
+ text_inputs = {"input_ids": [], "labels": []}
517
+ sample_types_list = []
518
+ image_idx = 0
519
+
520
+ for message_idx, message in enumerate(conversation):
521
+ prompt = self.apply_chat_template(
522
+ [message],
523
+ tokenize=False,
524
+ add_generation_prompt=False,
525
+ **output_kwargs["chat_template_kwargs"],
526
+ )
527
+ prompt_chunks = prompt.split(DEFAULT_IMAGE_TOKEN)
528
+ prompt = []
529
+ for chunk_idx in range(len(prompt_chunks) - 1):
530
+ prompt.append(prompt_chunks[chunk_idx])
531
+ num_tokens = self._get_visual_seq_len(grid_sizes[image_idx])
532
+ prompt.append(DEFAULT_IMAGE_TOKEN * num_tokens)
533
+ image_idx += 1
534
+ prompt.append(prompt_chunks[-1])
535
+ prompt = "".join(prompt)
536
+
537
+ # TODO: support attention_mask, position_ids, etc.
538
+ input_ids = self.tokenizer.encode(prompt, return_tensors="pt", **output_kwargs["text_kwargs"])[0]
539
+ text_inputs["input_ids"].append(input_ids)
540
+
541
+ targets = torch.full_like(input_ids, IGNORE_INDEX)
542
+ sample_types = torch.full_like(input_ids, IGNORE_INDEX)
543
+ if message["role"] == "assistant":
544
+ targets[self.generation_prompt_length:-1] = input_ids[self.generation_prompt_length:-1].clone()
545
+ # elif message["role"] == "stream":
546
+ # diff = torch.diff((input_ids == self.image_token_id).float())
547
+ # image_end_indices = torch.nonzero(diff < 0)[:, 0]
548
+ # targets[image_end_indices + 1] = input_ids[image_end_indices + 1]
549
+ # sample_types = targets.clone()
550
+ # sample_types[torch.logical_and(sample_types > 0, sample_types != self.eos_token_id)] = 0
551
+ # targets[-2] = input_ids[-2] # <|im_end|>
552
+
553
+ if message_idx > 0 and conversation[message_idx - 1]["role"] == "stream":
554
+ targets[0] = input_ids[0]
555
+ # TODO: consider non-special tokens
556
+ sample_types[0] = input_ids[0]
557
+
558
+ text_inputs["labels"].append(targets)
559
+ sample_types_list.append(sample_types)
560
+
561
+ # Negative sampling for streaming data
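+ # Keep at most `min(count per type)` labelled positions for each sampled token type and
+ # reset the rest to IGNORE_INDEX, so no single type dominates the streaming supervision.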
562
+ text_inputs = {k: torch.cat(v) for k, v in text_inputs.items()}
563
+ sample_types = torch.cat(sample_types_list)
564
+ types, counts = torch.unique(sample_types[sample_types > -1], return_counts=True)
565
+
566
+ if len(types) > 0:
567
+ target_num_samples = counts.amin()
568
+ for type_id, type_count in zip(types, counts):
569
+ if type_count > target_num_samples:
570
+ indices = torch.nonzero(sample_types == type_id)[:, 0]
571
+ random_selector = torch.randperm(indices.size(0))[:-target_num_samples]
572
+ text_inputs["labels"][indices[random_selector]] = IGNORE_INDEX
573
+ # sample_types[indices[random_selector]] = -1
574
+
575
+ assert len(grid_sizes) == image_idx, "Number of images does not match the number of image tokens in the text."
576
+
577
+ return text_inputs
578
+
579
+ def _process_conversation_without_label(
580
+ self,
581
+ conversation: Conversation,
582
+ image_inputs: Dict[str, Any],
583
+ **kwargs,
584
+ ):
585
+ output_kwargs = self._merge_kwargs(
586
+ Videollama3Qwen2ProcessorKwargs,
587
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
588
+ **kwargs,
589
+ )
590
+ prompt = self.apply_chat_template(
591
+ conversation,
592
+ tokenize=False,
593
+ **output_kwargs["chat_template_kwargs"],
594
+ )
595
+ return self.process_text(prompt, image_inputs, **output_kwargs["text_kwargs"])
596
+
597
+ def _process_conversation(
598
+ self,
599
+ conversation: Conversation,
600
+ images: Optional[Union[BatchedImage, BatchedNamedImage]] = None,
601
+ return_labels: bool = False,
602
+ **kwargs: Unpack[Videollama3Qwen2ProcessorKwargs],
603
+ ) -> BatchFeature:
604
+ assert isinstance(conversation, list), "Conversation must be a list of messages."
605
+
606
+ if images is None:
607
+ conversation = self._load_multimodal_data(conversation)
608
+ images = self._gather_multimodal_data(conversation)
609
+
610
+ output_kwargs = self._merge_kwargs(
611
+ Videollama3Qwen2ProcessorKwargs,
612
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
613
+ **kwargs,
614
+ )
615
+
616
+ if images is not None:
617
+ image_inputs = self.process_images(images, **output_kwargs["images_kwargs"])
618
+ else:
619
+ image_inputs = {}
620
+
621
+ if return_labels:
622
+ text_inputs = self._process_conversation_with_label(conversation, image_inputs, **kwargs)
623
+ else:
624
+ text_inputs = self._process_conversation_without_label(conversation, image_inputs, **kwargs)
625
+
626
+ return BatchFeature(data={**text_inputs, **image_inputs})
627
+
628
+ def _process_plain(
629
+ self,
630
+ text: Union[TextInput, PreTokenizedInput] = None,
631
+ images: Optional[Union[BatchedImage, BatchedNamedImage]] = None,
632
+ return_labels: bool = False,
633
+ **kwargs: Unpack[Videollama3Qwen2ProcessorKwargs],
634
+ ) -> BatchFeature:
635
+ if text is None:
636
+ raise ValueError("You must provide 'text' or 'message'.")
637
+ if return_labels:
638
+ raise ValueError("return_labels is not supported for plain text processing.")
639
+
640
+ output_kwargs = self._merge_kwargs(
641
+ Videollama3Qwen2ProcessorKwargs,
642
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
643
+ **kwargs,
644
+ )
645
+
646
+ if images is not None:
647
+ image_inputs = self.process_images(images, **output_kwargs["images_kwargs"])
648
+ else:
649
+ image_inputs = {}
650
+
651
+ text_inputs = self.process_text(text, image_inputs, **output_kwargs["text_kwargs"])
652
+
653
+ return BatchFeature(data={**text_inputs, **image_inputs})
654
+
655
+ def process_images(self, images: Union[BatchedImage, BatchedNamedImage], **kwargs):
656
+ modals, images = make_batched_images(images)
657
+ if not "merge_size" in kwargs:
658
+ kwargs["merge_size"] = [
659
+ self.image_merge_size if modal == "image" else self.video_merge_size
660
+ for modal in modals
661
+ ]
662
+ image_inputs = self.image_processor(images=images, **kwargs)
663
+ image_inputs["modals"] = modals
664
+ return image_inputs
665
+
666
+ def process_text(
667
+ self,
668
+ text: TextInput,
669
+ image_inputs: Dict[str, Any],
670
+ **kwargs,
671
+ ):
672
+ grid_sizes = self._get_downsampled_grid_sizes(image_inputs)
673
+
674
+ kwargs.pop("padding")
675
+ kwargs.pop("padding_side")
676
+
677
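+ # Expand each image token to its visual sequence length. A temporary
+ # <placeholder> marker is used so that already-expanded tokens are not
+ # matched again by the while loop.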
+ image_idx = 0
678
+ while DEFAULT_IMAGE_TOKEN in text:
679
+ num_tokens = self._get_visual_seq_len(grid_sizes[image_idx])
680
+ text = text.replace(DEFAULT_IMAGE_TOKEN, "<placeholder>" * num_tokens, 1)
681
+ image_idx += 1
682
+ text = text.replace("<placeholder>", DEFAULT_IMAGE_TOKEN)
683
+
684
+ assert len(grid_sizes) == image_idx, "Number of images does not match the number of image tokens in the text."
685
+
686
+ text_inputs = self.tokenizer(text, **kwargs)
687
+ return text_inputs
688
+
689
+ def __call__(
690
+ self,
691
+ text: Optional[TextInput] = None,
692
+ conversation: Optional[Conversation] = None,
693
+ images: Optional[Union[BatchedImage, BatchedNamedImage]] = None,
694
+ return_labels: bool = False,
695
+ **kwargs: Unpack[Videollama3Qwen2ProcessorKwargs],
696
+ ) -> BatchFeature:
697
+ if conversation is not None:
698
+ if text is not None:
699
+ raise ValueError("You cannot provide 'message' with 'text'.")
700
+ return self._process_conversation(conversation, images, return_labels, **kwargs)
701
+ return self._process_plain(text, images, return_labels, **kwargs)
702
+
703
+ def batch_decode(self, *args, **kwargs):
704
+ return self.tokenizer.batch_decode(*args, **kwargs)
705
+
706
+ def decode(self, *args, **kwargs):
707
+ return self.tokenizer.decode(*args, **kwargs)
708
+
709
+ def apply_chat_template(
710
+ self,
711
+ conversation: Conversation,
712
+ chat_template: Optional[str] = None,
713
+ tokenize: bool = False,
714
+ add_system_prompt: bool = False,
715
+ add_generation_prompt: bool = False,
716
+ image_token: Optional[str] = DEFAULT_IMAGE_TOKEN,
717
+ **kwargs,
718
+ ) -> str:
719
+ """
720
+ Similar to the `apply_chat_template` method on tokenizers, this method applies a Jinja template to input
721
+ conversations to turn them into a single tokenizable string.
722
+
723
+ Args:
724
+ conversation (`List[Dict[str, str]]`):
725
+ The conversation to format.
726
+ chat_template (`Optional[str]`, *optional*):
727
+ The Jinja template to use for formatting the conversation. If not provided, the processor's
728
+ chat template is used.
729
+ tokenize (`bool`, *optional*, defaults to `False`):
730
+ Whether to tokenize the output or not.
731
+ add_system_prompt (`bool`, *optional*, defaults to `False`):
732
+ Whether to add the system prompt to the output or not.
733
+ add_generation_prompt (`bool`, *optional*, defaults to `False`):
734
+ Whether to add the generation prompt to the output or not.
735
+ image_token (`Optional[str]`, *optional*, defaults to `<image>`):
736
+ The token to use for indicating images in the conversation.
737
+ **kwargs:
738
+ Additional keyword arguments
739
+ """
740
+
741
+ if chat_template is None:
742
+ if self.chat_template is not None:
743
+ chat_template = self.chat_template
744
+ else:
745
+ raise ValueError(
746
+ "No chat template is set for this processor. Please either set the `chat_template` attribute, "
747
+ "or provide a chat template as an argument. See "
748
+ "https://huggingface.co/docs/transformers/main/en/chat_templating for more information."
749
+ )
750
+ return self.tokenizer.apply_chat_template(
751
+ conversation,
752
+ chat_template=chat_template,
753
+ tokenize=tokenize,
754
+ add_system_prompt=add_system_prompt,
755
+ add_generation_prompt=add_generation_prompt,
756
+ image_token=image_token,
757
+ **kwargs
758
+ )
759
+
760
+ @property
761
+ def model_input_names(self):
762
+ tokenizer_input_names = self.tokenizer.model_input_names
763
+ image_processor_input_names = self.image_processor.model_input_names
764
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names)) + ["modals"]
765
+
766
+ # modified from transformers.ProcessorMixin
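+ # (extended here with a "chat_template_kwargs" bucket so that chat-template
+ # options are routed alongside the other per-modality kwargs)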
767
+ def _merge_kwargs(
768
+ self,
769
+ ModelProcessorKwargs: ProcessingKwargs,
770
+ tokenizer_init_kwargs: Optional[Dict] = None,
771
+ **kwargs,
772
+ ) -> Dict[str, Dict]:
773
+ """
774
+ Method to merge dictionaries of kwargs cleanly separated by modality within a Processor instance.
775
+ The order of operations is as follows:
776
+ 1) kwargs passed directly to the call have highest priority (kept for backward compatibility).
777
+ ```python
778
+ high_priority_kwargs = {"crop_size": {"height": 222, "width": 222}, "padding": "max_length"}
779
+ processor(..., **high_priority_kwargs)
780
+ ```
781
+ 2) kwargs passed as modality-specific kwargs have second priority. This is the recommended API.
782
+ ```python
783
+ processor(..., text_kwargs={"padding": "max_length"}, images_kwargs={"crop_size": {"height": 222, "width": 222}})
784
+ ```
785
+ 3) kwargs passed during instantiation of a modality processor have third priority.
786
+ ```python
787
+ tokenizer = tokenizer_class(..., {"padding": "max_length"})
788
+ image_processor = image_processor_class(...)
789
+ processor(tokenizer, image_processor) # will pass max_length unless overridden by kwargs at call
790
+ ```
791
+ 4) default kwargs specified at the processor level have lowest priority.
792
+ ```python
793
+ class MyProcessingKwargs(ProcessingKwargs, CommonKwargs, TextKwargs, ImagesKwargs, total=False):
794
+ _defaults = {
795
+ "text_kwargs": {
796
+ "padding": "max_length",
797
+ "max_length": 64,
798
+ },
799
+ }
800
+ ```
801
+ Args:
802
+ ModelProcessorKwargs (`ProcessingKwargs`):
803
+ Typed dictionary of kwargs specifically required by the model passed.
804
+ tokenizer_init_kwargs (`Dict`, *optional*):
805
+ Dictionary of kwargs the tokenizer was instantiated with, which need to take precedence over defaults.
806
+
807
+ Returns:
808
+ output_kwargs (`Dict`):
809
+ Dictionary of per-modality kwargs to be passed to each modality-specific processor.
810
+
811
+ """
812
+ # Initialize dictionaries
813
+ output_kwargs = {
814
+ "text_kwargs": {},
815
+ "images_kwargs": {},
816
+ "audio_kwargs": {},
817
+ "videos_kwargs": {},
818
+ "chat_template_kwargs": {},
819
+ "common_kwargs": {},
820
+ }
821
+
822
+ default_kwargs = {
823
+ "text_kwargs": {},
824
+ "images_kwargs": {},
825
+ "audio_kwargs": {},
826
+ "videos_kwargs": {},
827
+ "chat_template_kwargs": {},
828
+ "common_kwargs": {},
829
+ }
830
+
831
+ used_keys = set()
832
+
833
+ # get defaults from set model processor kwargs if they exist
834
+ for modality in default_kwargs:
835
+ default_kwargs[modality] = ModelProcessorKwargs._defaults.get(modality, {}).copy()
836
+ # update defaults with arguments from tokenizer init
837
+ for modality_key in ModelProcessorKwargs.__annotations__[modality].__annotations__.keys():
838
+ # init with tokenizer init kwargs if necessary
839
+ if tokenizer_init_kwargs is not None and modality_key in tokenizer_init_kwargs:
840
+ value = (
841
+ getattr(self.tokenizer, modality_key)
842
+ if hasattr(self.tokenizer, modality_key)
843
+ else tokenizer_init_kwargs[modality_key]
844
+ )
845
+ default_kwargs[modality][modality_key] = value
846
+ # now defaults kwargs are updated with the tokenizers defaults.
847
+ # pass defaults to output dictionary
848
+ output_kwargs.update(default_kwargs)
849
+
850
+ # update modality kwargs with passed kwargs
851
+ non_modality_kwargs = set(kwargs) - set(output_kwargs)
852
+ for modality in output_kwargs:
853
+ for modality_key in ModelProcessorKwargs.__annotations__[modality].__annotations__.keys():
854
+ # check if we received a structured kwarg dict or not to handle it correctly
855
+ if modality in kwargs:
856
+ kwarg_value = kwargs[modality].pop(modality_key, "__empty__")
857
+ # check if this key was passed as a flat kwarg.
858
+ if kwarg_value != "__empty__" and modality_key in non_modality_kwargs:
859
+ raise ValueError(
860
+ f"Keyword argument {modality_key} was passed two times:\n"
861
+ f"in a dictionary for {modality} and as a **kwarg."
862
+ )
863
+ elif modality_key in kwargs:
864
+ # we get a modality_key instead of popping it because modality-specific processors
865
+ # can have overlapping kwargs
866
+ kwarg_value = kwargs.get(modality_key, "__empty__")
867
+ else:
868
+ kwarg_value = "__empty__"
869
+ if kwarg_value != "__empty__":
870
+ output_kwargs[modality][modality_key] = kwarg_value
871
+ used_keys.add(modality_key)
872
+
873
+ # Determine if kwargs is a flat dictionary or contains nested dictionaries
874
+ if any(key in default_kwargs for key in kwargs):
875
+ # kwargs is dictionary-based, and some keys match modality names
876
+ for modality, subdict in kwargs.items():
877
+ if modality in default_kwargs:
878
+ for subkey, subvalue in subdict.items():
879
+ if subkey not in used_keys:
880
+ output_kwargs[modality][subkey] = subvalue
881
+ used_keys.add(subkey)
882
+ else:
883
+ # kwargs is a flat dictionary
884
+ for key in kwargs:
885
+ if key not in used_keys:
886
+ output_kwargs["common_kwargs"][key] = kwargs[key]
887
+
888
+ # all modality-specific kwargs are updated with common kwargs
889
+ for modality in output_kwargs:
890
+ output_kwargs[modality].update(output_kwargs["common_kwargs"])
891
+ return output_kwargs
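
To make the processor's entry points above concrete, here is a minimal usage sketch. It is not part of this commit: the repository id, the video path, and the exact keys of the video content dict (`video_path`, `fps`, `max_frames`) are placeholders/assumptions, and loading relies on the `auto_map` entry in `processor_config.json` below, so `trust_remote_code=True` is required.

```python
# Minimal usage sketch (not part of this commit). Repo id, file path, and the
# exact video-dict keys are assumptions; adjust to the actual repository.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "DAMO-NLP-SG/VideoLLaMA3-7B",  # placeholder repo id
    trust_remote_code=True,        # auto_map -> processing_videollama3.Videollama3Qwen2Processor
)

conversation = [
    {"role": "user", "content": [
        # fps / max_frames mirror the defaults in processor_config.json
        {"type": "video", "video": {"video_path": "demo.mp4", "fps": 1, "max_frames": 128}},
        {"type": "text", "text": "What happens in this video?"},
    ]},
]

# Inference: no labels, let the chat template append the generation prompt.
inputs = processor(
    conversation=conversation,
    add_generation_prompt=True,
    return_tensors="pt",
)

# Training-style call: labels are built per message; 'add_generation_prompt'
# must not be passed when return_labels=True (see the assertion above).
labeled = processor(
    conversation=conversation + [{"role": "assistant", "content": "A cat chases a ball."}],
    return_labels=True,
)
```

The returned `BatchFeature` combines the tokenized text with the image-processor outputs plus a `modals` list, matching `model_input_names` above.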
processor_config.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_videollama3.Videollama3Qwen2Processor"
4
+ },
5
+ "fps": 1,
6
+ "image_merge_size": 1,
7
+ "max_frames": 128,
8
+ "processor_class": "Videollama3Qwen2Processor",
9
+ "video_merge_size": 2
10
+ }
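
These values fix the video sampling and token budget: frames are drawn at `fps` = 1 and capped at `max_frames` = 128, and video patches are merged more aggressively (`video_merge_size` = 2) than single images (`image_merge_size` = 1), as selected per modality in `process_images` above. As a rough, hedged illustration of how the two sampling knobs interact (the exact logic lives in the processor's video loader, which may differ in detail):

```python
# Back-of-the-envelope sketch only; the real frame selection is implemented in
# the processor's video-loading code and may differ in detail.
def estimate_num_frames(duration_s: float, fps: float = 1.0, max_frames: int = 128) -> int:
    """Roughly how many frames a video contributes: fps-spaced samples, capped at max_frames."""
    return max(1, min(int(duration_s * fps), max_frames))

print(estimate_num_frames(45.0))   # -> 45 frames at 1 fps
print(estimate_num_frames(600.0))  # -> 128, capped by max_frames
```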
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|im_end|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "<|endoftext|>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ }
31
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,236 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<image>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": true
188
+ },
189
+ "151666": {
190
+ "content": "<|stream_start|>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": true
196
+ },
197
+ "151667": {
198
+ "content": "<|stream_end|>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": true
204
+ }
205
+ },
206
+ "additional_special_tokens": [
207
+ "<|im_start|>",
208
+ "<|im_end|>",
209
+ "<|object_ref_start|>",
210
+ "<|object_ref_end|>",
211
+ "<|box_start|>",
212
+ "<|box_end|>",
213
+ "<|quad_start|>",
214
+ "<|quad_end|>",
215
+ "<|vision_start|>",
216
+ "<|vision_end|>",
217
+ "<|vision_pad|>",
218
+ "<|image_pad|>",
219
+ "<|video_pad|>"
220
+ ],
221
+ "auto_map": {
222
+ "AutoProcessor": "processing_videollama3.Videollama3Qwen2Processor"
223
+ },
224
+ "bos_token": null,
225
+ "chat_template": "\n{%- set identifier = 'im' %}\n{% for message in messages %}\n {% if add_system_prompt and loop.first and message['role'] != 'system' %}\n {{- '<|im_start|>system\nYou are VideoLLaMA3 created by Alibaba DAMO Academy, a helpful assistant to help people understand images and videos.<|im_end|>\n' -}}\n {% endif %}\n {% if message['role'] == 'stream' %}\n {% set identifier = 'stream' %}\n {% else %}\n {% set identifier = 'im' %}\n {% endif %}\n {{- '<|' + identifier + '_start|>' + message['role'] + '\n' -}}\n {% if message['content'] is string %}\n {{- message['content'] + '<|' + identifier + '_end|>\n' -}}\n {% else %}\n {% for content in message['content'] %}\n {% if content is string %}\n {{- content -}}\n {% elif content['type'] == 'text' or 'text' in content %}\n {{- content['text'] -}}\n {% elif content['type'] == 'image' or 'image' in content %}\n {% if 'timestamp' in content %}\n {{- 'Time ' + content['timestamp'] | round(1) | string + 's: ' -}}\n {% endif %}\n {{- image_token + '\n' -}}\n {% elif content['type'] == 'video' or 'video' in content %}\n {% for i in range(content['num_frames']) %}\n {% if 'timestamps' in content %}\n {{- 'Time ' + content['timestamps'][i] | round(1) | string + 's:' -}}\n {% endif %}\n {% if i < content['num_frames'] - 1 %}\n {{- image_token + ',' -}}\n {% else %}\n {{- image_token + '\n' -}}\n {% endif %}\n {% endfor %}\n {% endif %}\n {% endfor %}\n {% if identifier == 'stream' %}\n {{- '<|' + identifier + '_end|>' -}}\n {% else %}\n {{- '<|' + identifier + '_end|>\n' -}}\n {% endif %}\n {% endif %}\n{% endfor %}\n{% if add_generation_prompt %}\n {{- '<|im_start|>assistant\n' -}}\n{% endif %}\n",
226
+ "clean_up_tokenization_spaces": false,
227
+ "eos_token": "<|im_end|>",
228
+ "errors": "replace",
229
+ "model_max_length": 32768,
230
+ "pad_token": "<|endoftext|>",
231
+ "padding_side": "right",
232
+ "processor_class": "Videollama3Qwen2Processor",
233
+ "split_special_tokens": false,
234
+ "tokenizer_class": "Qwen2Tokenizer",
235
+ "unk_token": null
236
+ }
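
The `chat_template` above is custom: `stream` messages are wrapped in `<|stream_start|> ... <|stream_end|>`, per-frame timestamps are rendered as `Time Xs:` prefixes before each image token, and `add_system_prompt` / `image_token` are extra template variables that the processor's `apply_chat_template` forwards to the tokenizer. As a hedged illustration (assuming a transformers version that forwards extra keyword arguments to the Jinja template, and using a local path as a placeholder), a simple image turn renders roughly as follows:

```python
# Illustration only: approximate rendering of the chat_template above for one turn.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # placeholder path to this repo

conversation = [
    {"role": "user", "content": [
        {"type": "image", "image": "frame.jpg", "timestamp": 2.0},
        {"type": "text", "text": "Describe this frame."},
    ]},
]

prompt = tokenizer.apply_chat_template(
    conversation,
    tokenize=False,
    add_system_prompt=True,      # extra template variable defined by this template
    add_generation_prompt=True,
    image_token="<image>",       # token id 151665 in added_tokens_decoder above
)
print(prompt)
# Expected shape of the output (whitespace approximate):
# <|im_start|>system
# You are VideoLLaMA3 created by Alibaba DAMO Academy, ...<|im_end|>
# <|im_start|>user
# Time 2.0s: <image>
# Describe this frame.<|im_end|>
# <|im_start|>assistant
```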
vocab.json ADDED
The diff for this file is too large to render. See raw diff