iproskurina committed
Commit 7535dd3
1 Parent(s): 6ae06d4

AutoGPTQ model for mistralai/Mistral-7B-v0.3: 4bits, gr128, desc_act=False
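The settings named in the commit message correspond to AutoGPTQ's quantization config: 4-bit weights, group size 128, and desc_act=False. Below is a minimal sketch of how such a checkpoint is typically produced with the auto-gptq API; the calibration text and output directory are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch of producing a GPTQ checkpoint with the settings from the
# commit message (bits=4, group_size=128, desc_act=False) using auto-gptq.
# The calibration example and output directory below are illustrative.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

base_model = "mistralai/Mistral-7B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

quantize_config = BaseQuantizeConfig(
    bits=4,          # "4bits"
    group_size=128,  # "gr128"
    desc_act=False,  # "desc_act=False"
)

# Calibration data: a real run would use a representative sample set.
examples = [tokenizer("GPTQ calibration sample text.")]

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("Mistral-7B-v0.3-GPTQ")  # illustrative output path
```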

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -1,18 +1,18 @@
 ---
+base_model: mistralai/Mistral-7B-v0.3
 language:
 - en
-base_model: mistralai/Mistral-7B-v0.3
-inference: false
 license: apache-2.0
-model_creator: Mistral AI
 model_name: Mistral 7B v0.3
-model_type: mistral
 pipeline_tag: text-generation
-prompt_template: '{prompt}'
-quantized_by: iproskurina
 tags:
 - gptq
 - 4-bit
+inference: false
+model_creator: Mistral AI
+model_type: mistral
+prompt_template: '{prompt}'
+quantized_by: iproskurina
 base_model_relation: quantized
 ---
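Once published, a checkpoint carrying this metadata can be loaded through the Transformers GPTQ integration (optimum with an auto-gptq backend). A minimal sketch follows; the repository id is a placeholder, not taken from this commit, and the card's prompt_template of '{prompt}' means the prompt is passed to the model as raw text.

```python
# Sketch of loading a 4-bit GPTQ checkpoint via the Transformers GPTQ
# integration. "<quantized-repo-id>" is a placeholder for the actual repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<quantized-repo-id>"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# prompt_template is '{prompt}': the prompt is used as-is, with no chat wrapper.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```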