Upload gguf-imat-llama-3.py #23, opened by SolidSnacke
Here is the rewritten file for Llama 3. gguf-imat and gguf-imat-llama-3 differ only in line 111:

subprocess.run(["python", convert_script, model_dir, "--outfile", gguf_model_path, "--outtype", "f16", "--vocab-type", "bpe"])
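For illustration, here is a minimal sketch of how that differing invocation is assembled. The values of `convert_script`, `model_dir`, and `gguf_model_path` below are placeholder assumptions (the real values come from the surrounding script); the point is the added `--vocab-type bpe` flag, which Llama 3 needs because it uses a BPE tokenizer.

```python
# Placeholder paths; in the actual script these are computed earlier.
convert_script = "convert.py"
model_dir = "models/llama-3"
gguf_model_path = "models/llama-3/model-f16.gguf"

# Build the conversion command. The only difference from the
# non-llama-3 variant is the trailing "--vocab-type bpe" pair.
cmd = [
    "python", convert_script, model_dir,
    "--outfile", gguf_model_path,
    "--outtype", "f16",
    "--vocab-type", "bpe",  # Llama 3 ships a BPE tokenizer
]

print(cmd)
```

In the real script this list is passed to `subprocess.run(...)`; it is built here without executing so the command can be inspected.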
SolidSnacke changed pull request status to closed.