eaddario committed · Commit 6de76a4 · verified · Parent: b3920d4

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED

```diff
@@ -7,7 +7,7 @@ language:
 - en
 license:
 - apache-2.0
-pipeline_tag: text-classification
+pipeline_tag: text-generation
 tags:
 - gguf
 - quant
@@ -17,7 +17,7 @@ tags:
 
 Using [LLaMA C++](https://github.com/ggerganov/llama.cpp) release [b4585](https://github.com/ggerganov/llama.cpp/releases/tag/b4585) for quantization.
 
-Original model: [watt-ai/watt-tool-8B](watt-ai/watt-tool-8B)
+Original model: [watt-ai/watt-tool-8B](https://huggingface.co/watt-ai/watt-tool-8B)
 
 All quantized versions were generated using an appropriate imatrix created from datasets available at [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration).
 
@@ -70,4 +70,4 @@ I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore
 
 ## Credits
 
-A big **Thank You!** to [Colin Kealty](https://huggingface.co/bartowski) for the many contributions and for being one of the best sources of high quality quantized models available in Hugginface, and a really big ***Thank You!*** to [Georgi Gerganov](https://github.com/ggerganov) for his amazing work with [llama.cpp](https://github.com/ggerganov/llama.cpp) and the [gguf](https://huggingface.co/docs/hub/en/gguf) file format.
+A big **Thank You!** to [Colin Kealty](https://huggingface.co/bartowski) for the many contributions and for being one of the best sources of high quality quantized models available in Hugginface, and a really big ***Thank You!*** to [Georgi Gerganov](https://github.com/ggerganov) for his amazing work with [llama.cpp](https://github.com/ggerganov/llama.cpp) and the [gguf](https://huggingface.co/docs/hub/en/gguf) file format.
```
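The README describes producing quantized GGUF variants with llama.cpp using an importance matrix (imatrix) built from calibration data. A minimal sketch of that two-step workflow is below, shown as a dry run (the commands are echoed rather than executed); the file names, calibration text, and quantization type are illustrative placeholders, not taken from this commit.

```shell
# Sketch of a typical llama.cpp imatrix quantization workflow (dry run).
# All file names below are placeholder assumptions, not from the commit.

MODEL_F16=watt-tool-8B-F16.gguf   # assumed name of the converted FP16 GGUF model
CALIB=calibration.txt             # assumed calibration text for the imatrix
QUANT=Q4_K_M                      # one example quantization type

# 1. Build an importance matrix from the calibration data.
echo llama-imatrix -m "$MODEL_F16" -f "$CALIB" -o imatrix.dat

# 2. Quantize the FP16 model, weighting columns by the imatrix.
echo llama-quantize --imatrix imatrix.dat "$MODEL_F16" "watt-tool-8B-$QUANT.gguf" "$QUANT"
```

The imatrix step is what the README refers to when it says each variant was generated "using an appropriate imatrix": it records per-weight activation statistics so the quantizer can preserve the most important weights at low bit widths.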