---
language:
- en
---

# VLGuard

[[Website]](https://ys-zong.github.io/VLGuard) [[Paper]](https://arxiv.org/abs/2402.02207) [[Code]](https://github.com/ys-zong/VLGuard)

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. (ICML 2024)

## Weights

These are the model weights for LLaVA-v1.5-7B, re-trained with LoRA after removing harmful samples from the training data. You can use them in exactly the same way as the original [LLaVA](https://github.com/haotian-liu/LLaVA/tree/main).
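As an illustration of "the same way as the original LLaVA", the sketch below follows the quick-start pattern from the LLaVA repository. The checkpoint path and the `model_base` choice are placeholders/assumptions for this example, not confirmed names; see the VLGuard GitHub repository for the exact checkpoints and scripts.

```python
# Minimal usage sketch using LLaVA's quick-start API (https://github.com/haotian-liu/LLaVA).
# The paths below are placeholders: "path/to/vlguard-llava-v1.5-7b-lora" stands for this
# repo's weights, and model_base assumes the Vicuna-7B backbone used by LLaVA-v1.5-7B
# LoRA checkpoints. Verify both against the VLGuard GitHub repo before use.
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "path/to/vlguard-llava-v1.5-7b-lora"  # placeholder for this repo's weights
model_base = "lmsys/vicuna-7b-v1.5"                # assumed LoRA base; verify before use

args = type("Args", (), {
    "model_path": model_path,
    "model_base": model_base,
    "model_name": get_model_name_from_path(model_path),
    "query": "Describe this image.",
    "conv_mode": None,
    "image_file": "https://llava-vl.github.io/static/images/view.jpg",
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)  # loads the LoRA weights on top of the base model and answers one query
```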
## Usage

Please refer to the [GitHub repository](https://github.com/ys-zong/VLGuard) for detailed usage.