Fix proper attribution for model (#1)
Fix proper attribution for model (3b48405d40f1e82b85c6160b7bf134ddb1d4ed3a)
Co-authored-by: rombo dawg <[email protected]>
README.md
CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model:
+base_model: Rombos-LLM-V2.5-Qwen-0.5b
 library_name: transformers
 license: apache-2.0
 pipeline_tag: text-generation
@@ -10,7 +10,7 @@ quantized_by: bartowski
 
 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3825">b3825</a> for quantization.
 
-Original model: https://huggingface.co/
+Original model: https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-0.5b
 
 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
 