---
library_name: transformers
tags:
- unsloth
- KoAlpaca
- Solar-Ko
- llama-cpp
- gguf-my-lora
license: apache-2.0
datasets:
- beomi/KoAlpaca-RealQA
language:
- ko
base_model: beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B
pipeline_tag: text-generation
---

# beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B`](https://huggingface.co/beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B) for more details.
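
If you prefer to run the conversion locally rather than through the web space, llama.cpp ships a conversion script. The sketch below is a rough equivalent, assuming a recent llama.cpp checkout (the script name and flags can differ between versions) and placeholder local paths:

```bash
# Grab llama.cpp for its conversion script (assumes a recent checkout;
# the script name and flags may vary by version).
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the PEFT adapter to a q8_0 GGUF LoRA.
# ./KoAlpaca-RealQA-Solar-Ko-Recovery-11B = local download of the adapter repo,
# ./base-model = placeholder path to the base model weights.
python llama.cpp/convert_lora_to_gguf.py ./KoAlpaca-RealQA-Solar-Ko-Recovery-11B \
  --base ./base-model \
  --outtype q8_0 \
  --outfile KoAlpaca-RealQA-Solar-Ko-Recovery-11B-q8_0.gguf
```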

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora KoAlpaca-RealQA-Solar-Ko-Recovery-11B-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora KoAlpaca-RealQA-Solar-Ko-Recovery-11B-q8_0.gguf (...other args)
```
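
The adapter file used above can be fetched from this repository with the Hugging Face CLI. Note that `base_model.gguf` is a placeholder: the base model GGUF is not included here and must be converted or downloaded separately. A minimal sketch:

```bash
# Download the q8_0 LoRA adapter from this repository into the current directory.
huggingface-cli download beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-Q8_0-GGUF \
  KoAlpaca-RealQA-Solar-Ko-Recovery-11B-q8_0.gguf --local-dir .
```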

To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
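
For example, once `llama-server` is running with the adapter loaded, you can query its OpenAI-compatible chat endpoint. A minimal sketch, assuming the default port (8080); the prompt is illustrative:

```bash
# Query the running llama-server; it applies the LoRA on top of the base model.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "안녕하세요! 간단히 자기소개 해주세요."}
    ],
    "temperature": 0.7
  }'
```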