Update README.md
README.md (changed)
@@ -65,6 +65,12 @@ python convert.py -i ~/float16_safetensored/WizardLM-70B-V1.0-HF -o ~/EXL2/Wizar
 - https://github.com/oobabooga/text-generation-webui/blob/main/convert-to-safetensors.py
 (best for sharding and float16/FP16 or bfloat16/BF16 conversion)
 
+Example to convert [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) directly to float16 safetensors in 10GB shards:
+
+```
+python convert-to-safetensors.py ~/original/WizardLM-70B-V1.0 --output ~/float16_safetensored/WizardLM-70B-V1.0 --max-shard-size 10GB
+```
+
 ** Use any one of the following scripts to convert your local pytorch_model bin files to safetensors:
 
 - https://github.com/turboderp/exllamav2/blob/master/util/convert_safetensors.py (official ExLlamaV2)
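
For orientation, the core of what these conversion scripts do is small: load the pickled checkpoint on CPU, optionally cast the weights, and re-serialize with safetensors. Below is a minimal sketch of that idea only, not the exllamav2 or oobabooga script itself; the file names are placeholders, and it assumes `torch` and `safetensors` are installed and the whole checkpoint fits in RAM:

```python
# Minimal sketch of a pytorch_model.bin -> float16 safetensors conversion.
# Illustrative only: this is NOT the exllamav2 or oobabooga script.
# Paths are placeholders; assumes the checkpoint fits in CPU RAM.
import torch
from safetensors.torch import save_file

state_dict = torch.load("pytorch_model.bin", map_location="cpu", weights_only=True)

# Cast floating-point weights to float16 and give every tensor its own
# contiguous storage (safetensors rejects shared/aliased tensors).
tensors = {
    name: (t.to(torch.float16) if t.is_floating_point() else t).clone().contiguous()
    for name, t in state_dict.items()
}

save_file(tensors, "model.safetensors")
```

The listed scripts remain preferable in practice: unlike this hand-rolled loop, they also handle sharding (Hugging Face style output of `model-00001-of-0000N.safetensors` shards plus a `model.safetensors.index.json` index), which the `--max-shard-size 10GB` example above relies on.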
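
Either way, it is worth spot-checking a converted file before quantizing. A small sketch using the `safetensors` API; the shard name below is a placeholder, since actual names depend on the shard count:

```python
# Spot-check a converted shard: list a few tensor names and confirm dtype.
# The file name is a placeholder; sharded output is typically named
# model-00001-of-0000N.safetensors.
from safetensors import safe_open

with safe_open("model-00001-of-00008.safetensors", framework="pt", device="cpu") as f:
    for name in list(f.keys())[:5]:
        t = f.get_tensor(name)
        print(name, t.dtype, tuple(t.shape))
```

If the dtypes print as `torch.float16`, the conversion did what the example above intended.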