nielsr (HF staff) committed
Commit 45d542e · verified · 1 Parent(s): a396fdd

Add metadata, link to paper page


This PR adds the `pipeline_tag` and `library_name` to the model card. This ensures the model can be found on https://huggingface.co./models?pipeline_tag=depth-estimation.
It also links to the paper page on Hugging Face.

Files changed (1)
  1. README.md +31 -3
README.md CHANGED
@@ -1,3 +1,31 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ library_name: pytorch
+ pipeline_tag: depth-estimation
+ ---
+
+ # Video Depth Anything
+
+ This repository contains the model described in [Video Depth Anything: Consistent Depth Estimation for Super-Long Videos](https://huggingface.co/papers/2501.12375).
+
+ Project Page: https://videodepthanything.github.io
+
+ ## About
+ This model is based on [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2) and can be applied to arbitrarily long videos without compromising quality, consistency, or generalization ability. Compared with diffusion-based models, it offers faster inference, fewer parameters, and more consistent depth accuracy.
+
+ ## Usage
+ ```bash
+ git clone https://github.com/DepthAnything/Video-Depth-Anything
+ cd Video-Depth-Anything
+ pip install -r requirements.txt
+ ```
+
+ Download the checkpoints listed [here](#pre-trained-models) and put them under the `checkpoints` directory.
+ ```bash
+ bash get_weights.sh
+ ```
+
+ ### Run inference on a video
+ ```bash
+ python3 run.py --input_video ./assets/example_videos/davis_rollercoaster.mp4 --output_dir ./outputs --encoder vitl
+ ```
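
The `run.py` invocation above can also be reused to batch-process a folder of clips. The loop below is a minimal sketch: it assumes `run.py` handles one video per call with the same `--input_video`, `--output_dir`, and `--encoder` flags shown in the diff, and that the input clips are `.mp4` files.

```bash
# Minimal batch-processing sketch (assumption: run.py takes one video per call
# with the flags shown above; adjust the glob pattern and encoder to your setup).
for video in ./assets/example_videos/*.mp4; do
    python3 run.py --input_video "$video" --output_dir ./outputs --encoder vitl
done
```

As with the single-video command, results are written under `./outputs`.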