---
license: cc-by-nc-4.0
---

Simple Python script to generate various GGUF imatrix quantizations from a Hugging Face author/model input, for Windows and NVIDIA hardware.

This is set up for a Windows machine with 8GB of VRAM. If you want to change the -ngl (number of GPU layers) value, you can do so at line 120. This is only relevant during the --imatrix data generation. If you don't have enough VRAM, you can decrease the -ngl amount, or set it to 0 to use only your system RAM for all layers.
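As a sketch of what that setting amounts to (the variable and file names below are illustrative assumptions, not the script's actual code), the imatrix step boils down to invoking llama.cpp's `imatrix` tool with `-ngl` controlling GPU offload:

```python
# Hypothetical sketch: how the -ngl value feeds into the llama.cpp
# imatrix command line. Paths and names are assumptions for illustration.
ngl = 8  # lower this, or set it to 0, if you run out of VRAM

cmd = [
    "llama.cpp/imatrix",
    "-m", "model-f16.gguf",       # the converted base model
    "-f", "imatrix/imatrix.txt",  # the calibration text
    "-o", "imatrix.dat",          # the resulting importance matrix
    "-ngl", str(ngl),             # number of layers offloaded to the GPU
]
print(" ".join(cmd))
```

With `ngl = 0` every layer stays on the CPU and only system RAM is used, at the cost of a slower imatrix pass.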

Your imatrix.txt is expected to be located inside the imatrix folder. The included file is considered a good option; this discussion is where it came from.

Requirements:

  • Python 3.11
  • pip install huggingface_hub