Update README.md
README.md CHANGED
@@ -1,3 +1,8 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+ This repository contains the models described in the following paper:
+
+ Orhan AE (2024) HVM-1: [Large-scale video models pretrained with nearly 5000 hours of human-like video data.](https://arxiv.org/abs/2407.18067) arXiv:2407.18067.
+
+ These models were pretrained with the spatiotemporal MAE algorithm on ~5k hours of curated human-like video data (mostly egocentric, temporally extended, continuous video recordings) and then, optionally, finetuned on various downstream tasks with few-shot supervised training.
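The diff above only updates the model card, so none of the HVM-1 training code appears here. As a rough orientation for the spatiotemporal MAE pretraining mentioned in the new README text, the sketch below illustrates the core masking idea: a large fraction of spacetime patch tokens is hidden at random and the model is trained to reconstruct the missing patches. The function name, masking ratio, and patch-grid numbers are illustrative assumptions, not taken from the HVM-1 codebase.

```python
import torch

def random_spacetime_mask(num_tokens: int, mask_ratio: float = 0.9,
                          device: str = "cpu") -> torch.Tensor:
    """Sample a random boolean mask over spacetime patch tokens, MAE-style:
    True marks a masked token that the decoder must reconstruct."""
    num_masked = int(mask_ratio * num_tokens)
    noise = torch.rand(num_tokens, device=device)   # one random score per token
    ids_shuffle = torch.argsort(noise)              # random permutation of token indices
    mask = torch.zeros(num_tokens, dtype=torch.bool, device=device)
    mask[ids_shuffle[:num_masked]] = True           # hide the first num_masked tokens
    return mask

# Example: a 16-frame clip tokenized into an 8 x 14 x 14 grid of spacetime
# patches (1568 tokens); these dimensions are purely illustrative.
mask = random_spacetime_mask(num_tokens=8 * 14 * 14, mask_ratio=0.9)
print(mask.sum().item(), "of", mask.numel(), "tokens masked")
```

In MAE-style pretraining the encoder typically processes only the visible tokens, which is what makes such a high masking ratio computationally attractive for video.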