Commit 298b0ea (parent: 40edf47) by wsxiaoys

Upload README.md with huggingface_hub

Files changed (1): README.md added, +126 −0

---
base_model: bigcode/starcoderbase-1b
datasets:
- bigcode/the-stack-dedup
library_name: transformers
license: bigcode-openrail-m
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- gguf-my-repo
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
extra_gated_prompt: "## Model License Agreement\nPlease read the BigCode [OpenRAIL-M\
  \ license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)\
  \ agreement before accepting it.\n "
extra_gated_fields:
  ? I accept the above license agreement, and will use the Model complying with the
    set of use restrictions and sharing requirements
  : checkbox
duplicated_from: bigcode-data/starcoderbase-1b
model-index:
- name: StarCoderBase-1B
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 15.17
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (C++)
      type: nuprl/MultiPL-E
    metrics:
    - type: pass@1
      value: 11.68
      name: pass@1
      verified: false
    - type: pass@1
      value: 14.2
      name: pass@1
      verified: false
    - type: pass@1
      value: 13.38
      name: pass@1
      verified: false
    - type: pass@1
      value: 9.94
      name: pass@1
      verified: false
    - type: pass@1
      value: 12.52
      name: pass@1
      verified: false
    - type: pass@1
      value: 10.24
      name: pass@1
      verified: false
    - type: pass@1
      value: 3.92
      name: pass@1
      verified: false
    - type: pass@1
      value: 11.31
      name: pass@1
      verified: false
    - type: pass@1
      value: 5.37
      name: pass@1
      verified: false
---

# wsxiaoys/starcoderbase-1b-Q2_K-GGUF
This model was converted to GGUF format from [`bigcode/starcoderbase-1b`](https://huggingface.co/bigcode/starcoderbase-1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bigcode/starcoderbase-1b) for more details on the model.

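Since the repo contains a single quantized file, you can also download it directly with the `huggingface-cli` tool that ships with `huggingface_hub`. A minimal sketch (the `--local-dir .` destination is an arbitrary choice):

```bash
huggingface-cli download wsxiaoys/starcoderbase-1b-Q2_K-GGUF starcoderbase-1b-q2_k.gguf --local-dir .
```
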
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo wsxiaoys/starcoderbase-1b-Q2_K-GGUF --hf-file starcoderbase-1b-q2_k.gguf -p "The meaning to life and the universe is"
```

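Since StarCoderBase is a code-completion model, a code-shaped prompt is usually more representative than free-form text. A hedged variant (the prompt, the `-n 128` generation length, and `--temp 0.2` are arbitrary example settings):

```bash
llama-cli --hf-repo wsxiaoys/starcoderbase-1b-Q2_K-GGUF --hf-file starcoderbase-1b-q2_k.gguf \
  -p "def print_hello_world():" -n 128 --temp 0.2
```
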
### Server:
```bash
llama-server --hf-repo wsxiaoys/starcoderbase-1b-Q2_K-GGUF --hf-file starcoderbase-1b-q2_k.gguf -c 2048
```

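Once the server is running (it listens on `http://localhost:8080` by default), you can request completions over HTTP. A minimal sketch against the server's `/completion` endpoint (`n_predict: 64` is an arbitrary choice):

```bash
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def print_hello_world():", "n_predict": 64}'
```
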
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

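For instance, on a Linux machine with an NVIDIA GPU and the CUDA toolkit installed, the build command might become the following sketch (`-j` simply parallelizes the build):

```bash
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```
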
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo wsxiaoys/starcoderbase-1b-Q2_K-GGUF --hf-file starcoderbase-1b-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo wsxiaoys/starcoderbase-1b-Q2_K-GGUF --hf-file starcoderbase-1b-q2_k.gguf -c 2048
```
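
If you have already downloaded the GGUF file (for example with `huggingface-cli` as above), you can point the binary at it directly with `-m` instead of the `--hf-repo`/`--hf-file` pair. A minimal sketch (the prompt and `-n 128` are arbitrary):

```bash
./llama-cli -m starcoderbase-1b-q2_k.gguf -p "def print_hello_world():" -n 128
```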