Update README.md
README.md
ANAKIN is a dataset of mANipulated videos and mAsK annotatIoNs.
To the best of our knowledge, ANAKIN is the first real-world dataset of professionally edited video clips,
paired with source videos, edit descriptions, and binary mask annotations of the edited regions.
ANAKIN consists of 1023 videos in total: 352 edited videos from the
[VideoSham](https://github.com/adobe-research/VideoSham-dataset)
dataset plus 671 new videos collected from the Vimeo platform.

## Data Format

| Label | Description |
|-------------------|---------------------------------------------------------------------------------|
| video-id          | Video ID |
| full*             | Full-length original video |
| trimmed           | Short clip trimmed from `full` |
| edited            | Manipulated version of `trimmed` |
| masks*            | Per-frame binary masks annotating the manipulated regions |
| start-time*       | Trim start time (in seconds) |
| end-time*         | Trim end time (in seconds) |
| task              | Task given to the video editor |
| manipulation-type | One of five manipulation types: splicing, inpainting, swap, audio, frame-level |
| editor-id         | Editor ID |

*There are several subset configurations available.
The choice depends on whether you need to download the full-length videos and whether you only need the videos that have mask annotations.
`start-time` and `end-time` are returned only for the subset configs that include full videos.

| config     | full | masks | train/val/test |
| ---------- | ---- | ----- | -------------- |
| all        | yes  | maybe | 681/98/195     |
| no-full    | no   | maybe | 716/102/205    |
| has-masks  | no   | yes   | 297/43/85      |
| full-masks | yes  | yes   | 297/43/85      |
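
For the configs that include full videos (`all` and `full-masks`), `start-time` and `end-time` locate the trimmed clip inside the full-length source. Below is a minimal sketch of recovering that window with `torchvision`; it assumes a downloaded (non-streaming) config and that the sample keys match the labels in the table above.

```python
from datasets import load_dataset
from torchvision.io import read_video

# Sketch only: 'full-masks' is downloaded locally; keys follow the table above.
dataset = load_dataset("AlexBlck/ANAKIN", "full-masks")

sample = dataset["train"][0]
# Cut the annotated window out of the full-length source video.
window, audio, info = read_video(
    sample["full"],
    start_pts=sample["start-time"],
    end_pts=sample["end-time"],
    pts_unit="sec",
    output_format="TCHW",
)
print(window.shape)  # frames x channels x height x width
```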

## Example
The data can either be downloaded or [streamed](https://huggingface.co/docs/datasets/stream).

```python
from datasets import load_dataset
from torchvision.io import read_video

config = 'no-full'  # one of ['all', 'no-full', 'has-masks', 'full-masks']

# Either download the files or stream them on the fly
streaming = True
if streaming:
    dataset = load_dataset("AlexBlck/ANAKIN", config, streaming=True)
else:
    dataset = load_dataset("AlexBlck/ANAKIN", config, num_proc=8)

for sample in dataset['train']:  # splits: ['train', 'validation', 'test']
    trimmed_video, trimmed_audio, _ = read_video(sample['trimmed'], output_format="TCHW")
    edited_video, edited_audio, _ = read_video(sample['edited'], output_format="TCHW")
    masks = sample['masks']
    print(sample.keys())
```
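
The storage format of the `masks` field isn't spelled out above, so the following is only a hypothetical sketch: it continues from the loop in the example and assumes `masks` resolves to a list of per-frame binary mask image paths (use a config such as `has-masks` or `full-masks` so every sample actually has masks).

```python
import numpy as np
from PIL import Image

# Hypothetical sketch: assumes `masks` (from the loop above) is a list of
# per-frame binary mask image paths aligned with the edited clip's frames.
# Adapt the loading step to whatever format the field actually uses.
first_mask = np.array(Image.open(masks[0]).convert("1"))  # H x W boolean mask
print(f"Manipulated area in frame 0: {100 * first_mask.mean():.1f}% of pixels")
```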