OPEA /
Tags: Safetensors · llama · 4-bit precision · intel/auto-round
wenhuach committed
Commit 417447c · 1 parent: cb38f4a

change to fp16


Signed-off-by: wenhuach <[email protected]>

Files changed (1):
  config.json (+1, -1)
config.json CHANGED
@@ -51,7 +51,7 @@
     "rope_scaling": null,
     "rope_theta": 500000.0,
     "tie_word_embeddings": false,
-    "torch_dtype": "bfloat16",
+    "torch_dtype": "float16",
     "transformers_version": "4.47.0",
     "use_cache": true,
     "vocab_size": 128256