Lewdiculous committed
Commit cc67229 · verified · 1 Parent(s): e271bba

Update README.md

Files changed (1): README.md +4 −3
README.md CHANGED
@@ -11,15 +11,16 @@ tags:
 > Version (**v2**) files added! With imatrix data generated from the FP16 and conversions directly from the BF16. <br>
 > Hopefully avoiding any losses in the model conversion, as has been the recently discussed topic on Llama-3 and GGUF lately. <br>
 > If you are able to test them and notice any issues let me know in the discussions.
->
+
+> [!IMPORTANT]
 > **Relevant:** <br>
 > These quants have been done after the fixes from [**llama.cpp/pull/6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
 > Use **KoboldCpp** version **1.64** or higher, make sure you're up-to-date.
 
 > [!TIP]
-> I apologize for disrupting your experience.
+> I apologize for disrupting your experience. <br>
 > My upload speeds have been cooked and unstable lately. <br>
-> If you **want** and you are able to... <br>
+> If you **want** and you are **able to**... <br>
 > You can [**support my various endeavors here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
 
 GGUF-IQ-Imatrix quants for [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS).
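
A minimal sketch of grabbing one of these quants programmatically with `huggingface_hub`; the `repo_id` and `filename` below are illustrative assumptions, so check the repository's file list for the actual names. The downloaded `.gguf` can then be loaded directly in **KoboldCpp 1.64** or newer.

```python
# Minimal sketch: download a GGUF quant with huggingface_hub.
# NOTE: repo_id and filename are assumptions for illustration only;
# check the model repository's file list for the real quant filenames.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix",  # assumed repo id
    filename="Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf",            # assumed filename
)
print(local_path)  # point KoboldCpp (>= 1.64) at this file
```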