---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
inference: false
pipeline_tag: text-generation
base_model: speakleash/Bielik-11B-v2.1-Instruct
---

<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-11B-v2.1-Instruct-GGUF

This repo contains GGUF format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v2.1-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct).
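
A single quantization file can be fetched with `huggingface-cli` (from the `huggingface_hub` package); the repo id and filename below are assumptions based on this repository's naming, so adjust them to the file you want:

```
# Repo id and filename are examples - check the Files tab for the exact names
huggingface-cli download speakleash/Bielik-11B-v2.1-Instruct-GGUF \
  Bielik-11B-v2.1-Instruct.Q4_K_M.gguf --local-dir .
```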

<b><u>DISCLAIMER: Be aware that quantized models show reduced response quality and possible hallucinations!</u></b><br>

### Available quantization formats:
* **q4_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* **q5_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* **q6_k:** Uses Q8_K for all tensors
* **q8_0:** Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
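
For a quick local test, any of these files can be run directly with llama.cpp; this is a minimal sketch assuming a recent llama.cpp build (where the CLI binary is named `llama-cli`) and the Q4_K_M file in the current directory:

```
# -c sets the context length, -ngl offloads layers to the GPU (requires a GPU build)
./llama-cli -m ./Bielik-11B-v2.1-Instruct.Q4_K_M.gguf \
  -c 4096 -ngl 99 --temp 0.2 \
  -p "Napisz jedno zdanie o Krakowie."
```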

### Ollama Modelfile
The GGUF file can be used with [Ollama](https://ollama.com/). To do this, you need to import the model using the configuration defined in a Modelfile. For example, for Bielik-11B-v2.1-Instruct.Q4_K_M.gguf (use the full path to the model file), the Modelfile looks like this:

```
FROM ./Bielik-11B-v2.1-Instruct.Q4_K_M.gguf

TEMPLATE """<s>{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"

# Remember to set a low temperature for experimental quants (1-3 bits)
PARAMETER temperature 0.1
```
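
With the configuration above saved as `Modelfile`, importing and querying the model comes down to two commands; the model name `bielik` below is just an example:

```
# Register the model with Ollama under a local name, then chat with it
ollama create bielik -f Modelfile
ollama run bielik "Kim był Mikołaj Kopernik?"
```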

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quant from:** [Bielik-11B-v2.1-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct)
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)

### About GGUF

GGUF is a model file format introduced by the llama.cpp team on August 21st, 2023.

Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows, macOS (Silicon) and Linux, with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server (see the server sketch after this list).
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that ctransformers has not been updated in a long time and does not support many recent models.
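
As an example of the llama-cpp-python route mentioned above, the sketch below starts its OpenAI-compatible server on one of these GGUF files; the filename is an example and the port defaults to 8000:

```
pip install 'llama-cpp-python[server]'
# Serves an OpenAI-compatible API at http://localhost:8000/v1
python -m llama_cpp.server --model ./Bielik-11B-v2.1-Instruct.Q4_K_M.gguf --n_ctx 4096
```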

### Responsible for model quantization
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery.

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).