---
base_model: lianghsun/Llama-3.2-Taiwan-3B
library_name: transformers
datasets:
- lianghsun/tw-novel-1.1B
- lianghsun/tw-finance-159M
- lianghsun/tw-legal-news-24M
- lianghsun/tw-gov-news-90M
- lianghsun/tw-gov-556k
- lianghsun/tw-news-551M
- lianghsun/tw-health-43M
- lianghsun/tw-science-24M
- lianghsun/tw-book-43M
- lianghsun/tw-society-88M
- lianghsun/tw-law-article-evolution
- lianghsun/tw-processed-judgments
- lianghsun/tw-legal-methodology
- lianghsun/tw-legal-qa
- lianghsun/tw-judgment-gist
- lianghsun/reasoning-base-20k
- lianghsun/wikipedia-zh-filtered
- AWeirdDev/zh-tw-pts-articles-sm
- bhxiang/c4_calibrate_mini
- benchang1110/pretrainedtw
- benchang1110/sciencetw
- intfloat/multilingual_cc_news
language:
- zh
- en
license: llama3.2
tags:
- ROC
- Taiwan
- zh-tw
- llama-factory
- llama-cpp
- gguf-my-repo
new_version: lianghsun/Llama-3.2-Taiwan-3B-Instruct
pipeline_tag: text-generation
widget:
- text: 中華民國憲法第一條
---

# itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`lianghsun/Llama-3.2-Taiwan-3B`](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B) for more details on the model.
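
If you'd rather fetch the quantized weights once up front instead of letting llama.cpp download them on demand, here is a minimal sketch using the `huggingface-cli` tool (assumes `huggingface_hub` is installed via pip):

```bash
# Download the GGUF file into the current directory.
huggingface-cli download itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF \
  llama-3.2-taiwan-3b-q4_k_m.gguf --local-dir .
```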

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
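
Since the base model targets Traditional Chinese, the widget prompt from the metadata above makes a more representative smoke test than the default English prompt; a sketch (not from the original card), with `-n` capping the number of generated tokens:

```bash
# One-off completion using the Traditional Chinese widget prompt.
llama-cli --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF \
  --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf \
  -p "中華民國憲法第一條" -n 128
```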

### Server:
```bash
llama-server --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -c 2048
```
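
Once the server is running, you can query its OpenAI-compatible chat endpoint; a minimal sketch assuming llama-server's default host and port (`127.0.0.1:8080`):

```bash
# Send a chat request to the server's OpenAI-compatible endpoint.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "中華民國憲法第一條"}]}'
```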

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
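
Note that recent llama.cpp revisions have replaced the Makefile build with CMake; if `make` fails on a fresh checkout, the CMake equivalent below is a reasonable fallback (a sketch; flag spellings may vary by version):

```bash
# CMake build used by newer llama.cpp versions; binaries land in build/bin/.
# Add hardware flags as needed, e.g. -DGGML_CUDA=ON for NVIDIA GPUs.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```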

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -c 2048
```