bunnycore committed
Commit b433bda · verified · 1 Parent(s): 3835acb

Upload README.md with huggingface_hub

Files changed (1): README.md ADDED (+33 -0)
---
base_model: bunnycore/Llama-3.2-3B-R1-lora
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
datasets:
- open-thoughts/OpenThoughts-114k
- bespokelabs/Bespoke-Stratos-17k
---

# bunnycore/Llama-3.2-3B-R1-lora-F16-GGUF
This LoRA adapter was converted to GGUF format from [`bunnycore/Llama-3.2-3B-R1-lora`](https://huggingface.co/bunnycore/Llama-3.2-3B-R1-lora) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/bunnycore/Llama-3.2-3B-R1-lora) for more details.
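
If you want the converted adapter file on disk before running llama.cpp, one option is `huggingface-cli` (a minimal sketch; the filename matches the one used in the llama.cpp commands below, and `./models` is just an example target directory):

```bash
# download the GGUF LoRA adapter into ./models (directory is an example)
huggingface-cli download bunnycore/Llama-3.2-3B-R1-lora-F16-GGUF \
  Llama-3.2-3B-R1-lora-f16.gguf --local-dir ./models
```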

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora Llama-3.2-3B-R1-lora-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Llama-3.2-3B-R1-lora-f16.gguf (...other args)
```
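
As a fuller sketch (paths and the prompt are hypothetical, and `base_model.gguf` stands for a GGUF build of the Llama 3.2 3B base model, which is not included in this repo), llama.cpp can also weight the adapter's influence with `--lora-scaled`:

```bash
# full-strength adapter (equivalent to --lora)
llama-cli -m ./models/base_model.gguf \
  --lora ./models/Llama-3.2-3B-R1-lora-f16.gguf \
  -p "Explain LoRA in one sentence." -n 128

# apply the adapter at half strength (0.0 disables it, 1.0 is full strength)
llama-cli -m ./models/base_model.gguf \
  --lora-scaled ./models/Llama-3.2-3B-R1-lora-f16.gguf 0.5 \
  -p "Explain LoRA in one sentence." -n 128
```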

To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
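
Once `llama-server` is running (by default on port 8080), the base-plus-adapter model can be queried over HTTP. A minimal sketch using the server's standard `/completion` endpoint (the prompt is just an example):

```bash
# send a completion request to the running llama-server instance
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Explain LoRA in one sentence.", "n_predict": 128}'
```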