TheBloke committed
Commit ca57a28 (1 parent: cff0b07)

Upload README.md

Files changed (1)
  1. README.md +15 -0
README.md CHANGED
@@ -12,6 +12,20 @@ model-index:
 model_creator: Bram Vanroy
 model_name: Llama 2 13B Chat Dutch
 model_type: llama
+prompt_template: '[INST] <<SYS>>
+
+  You are a helpful, respectful and honest assistant. Always answer as helpfully as
+  possible, while being safe. Your answers should not include any harmful, unethical,
+  racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
+  are socially unbiased and positive in nature. If a question does not make any sense,
+  or is not factually coherent, explain why instead of answering something not correct.
+  If you don''t know the answer to a question, please don''t share false information.
+
+  <</SYS>>
+
+  {prompt}[/INST]
+
+  '
 quantized_by: TheBloke
 tags:
 - generated_from_trainer
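The `prompt_template` added in this hunk is the standard Llama 2 chat format, with `{prompt}` as the placeholder for the user message. A minimal sketch (not part of this commit; `PROMPT_TEMPLATE` and `build_prompt` are hypothetical names) of filling the template before inference:

```python
# Llama 2 chat template, as added to the README metadata in the hunk above.
PROMPT_TEMPLATE = """[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as \
possible, while being safe. Your answers should not include any harmful, unethical, \
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses \
are socially unbiased and positive in nature. If a question does not make any sense, \
or is not factually coherent, explain why instead of answering something not correct. \
If you don't know the answer to a question, please don't share false information.
<</SYS>>

{prompt}[/INST]"""


def build_prompt(prompt: str) -> str:
    """Substitute the user message into the Llama 2 chat template."""
    return PROMPT_TEMPLATE.format(prompt=prompt)


# Example with a Dutch question, since the model is tuned for Dutch.
print(build_prompt("Wat is de hoofdstad van Nederland?"))
```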
 
@@ -68,6 +82,7 @@ Here is an incomplate list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF)
 * [Bram Vanroy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch)
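The second hunk simply adds the new AWQ repository alongside the existing GPTQ, GGUF, and fp16 links. For context, a hedged sketch (not from the README; the `.gguf` file name is a placeholder) of running one of the linked GGUF quantisations with `llama-cpp-python`:

```python
from llama_cpp import Llama

# Placeholder path: substitute whichever 2- to 8-bit .gguf file you
# downloaded from the GGUF repository linked above.
llm = Llama(model_path="./llama-2-13b-chat-dutch.Q4_K_M.gguf", n_ctx=4096)

# Prompt built with the template from this commit (system text shortened
# here for brevity).
prompt = (
    "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n"
    "<</SYS>>\n\nWat is de hoofdstad van Nederland?[/INST]"
)

resp = llm(prompt, max_tokens=128)
print(resp["choices"][0]["text"])
```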