mtgv committed
Commit 6dc0a2f
Parent: 211026e

Update README.md

Files changed (1): README.md (+12, -2)
README.md CHANGED
 
metrics:
- mIoU
pipeline_tag: image-classification
---

# VisionLLaMA-Base-MAE

Using the Masked Autoencoders (MAE) paradigm, the VisionLLaMA-Base-MAE model is trained on ImageNet-1k without labels. It shows substantial improvements on ImageNet-1K classification tasks (SFT and linear probing) and on ADE20K segmentation.
 
| Model | SFT | Linear probing | ADE20K (mIoU) |
|---|---|---|---|
| VisionLLaMA-Base-MAE (ep1600) | 84.3 | 71.7 | 50.2 |

# How to Use

Please refer to the [GitHub](https://github.com/Meituan-AutoML/VisionLLaMA) page for usage.
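
If you only want to pull the released weights from this model repository, a minimal sketch with `huggingface_hub` and PyTorch is shown below. The repository id and checkpoint filename are assumptions (check the Files tab for the actual names); the model definition and training/evaluation scripts come from the GitHub repository above.

```python
# Minimal sketch (not the official API): download the pre-trained MAE checkpoint
# from the Hub and inspect it with plain PyTorch. Both repo_id and filename are
# assumed here -- verify them against the files actually published in this repo.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="mtgv/VisionLLaMA-Base-MAE",        # assumed repository id
    filename="visionllama_base_mae_1600e.pth",  # hypothetical checkpoint filename
)
checkpoint = torch.load(ckpt_path, map_location="cpu")
# MAE-style checkpoints often nest the weights under a "model" key.
state_dict = checkpoint.get("model", checkpoint)
print(f"{len(state_dict)} tensors, e.g. {sorted(state_dict)[:5]}")
```

To build the network and load these weights for fine-tuning, linear probing, or segmentation, use the model classes and scripts provided in the GitHub repository.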

# Citation

```
@article{chu2024visionllama,
  title={VisionLLaMA: A Unified LLaMA Interface for Vision Tasks},
  author={Chu, Xiangxiang and Su, Jianlin and Zhang, Bo and Shen, Chunhua},
  journal={arXiv preprint arXiv:2403.00522},
  year={2024}
}
```