Remove <sep> from tokenizer special tokens
#4 opened by gabegoodhart

Description
The original granite-3.1-2b-instruct model had these same errant <sep> tokens, which were removed after the initial upload. With these in place, the model cannot be converted to GGUF using convert_hf_to_gguf.py. Error below:
...
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 131072
INFO:hf-to-gguf:gguf: embedding length = 2048
INFO:hf-to-gguf:gguf: feed forward length = 8192
INFO:hf-to-gguf:gguf: head count = 32
INFO:hf-to-gguf:gguf: key-value head count = 8
INFO:hf-to-gguf:gguf: rope theta = 5000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:gguf: (granite) attention_scale = 0.015625
INFO:hf-to-gguf:gguf: (granite) embedding_scale = 12.0
INFO:hf-to-gguf:gguf: (granite) residual_scale = 0.22
INFO:hf-to-gguf:gguf: (granite) logits_scale = 8.0
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 1569, in set_vocab
    self._set_vocab_sentencepiece()
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 792, in _set_vocab_sentencepiece
    tokens, scores, toktypes = self._create_vocab_sentencepiece()
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 809, in _create_vocab_sentencepiece
    raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: granite-guardian-3.1-2b/tokenizer.model

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 1572, in set_vocab
    self._set_vocab_llama_hf()
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 884, in _set_vocab_llama_hf
    vocab = gguf.LlamaHfVocab(self.dir_model)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/gguf-py/gguf/vocab.py", line 390, in __init__
    raise FileNotFoundError('Cannot find Llama BPE tokenizer')
FileNotFoundError: Cannot find Llama BPE tokenizer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 5140, in <module>
    main()
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 5134, in main
    model_instance.write()
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 440, in write
    self.prepare_metadata(vocab_only=False)
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 433, in prepare_metadata
    self.set_vocab()
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 1575, in set_vocab
    self._set_vocab_gpt2()
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 728, in _set_vocab_gpt2
    tokens, toktypes, tokpre = self.get_vocab_base()
                               ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ghart/Projects/github/ggerganov/llama.cpp/convert_hf_to_gguf.py", line 524, in get_vocab_base
    assert max(tokenizer.vocab.values()) < vocab_size
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
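For reference, the failing assertion can be reproduced outside the converter with a quick check that every token id stays below the model's declared vocab_size. This is only an illustrative sketch, not part of the PR; it assumes a local checkout of this repo in a directory named granite-guardian-3.1-2b.

```python
# Sketch: mirror the converter's check
#   assert max(tokenizer.vocab.values()) < vocab_size
# Errant added special tokens such as <sep> can push ids past vocab_size.
from transformers import AutoConfig, AutoTokenizer

model_dir = "granite-guardian-3.1-2b"  # placeholder: local checkout of this repo

config = AutoConfig.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

vocab_size = config.vocab_size
max_id = max(tokenizer.get_vocab().values())

print(f"declared vocab_size: {vocab_size}")
print(f"highest token id:    {max_id}")

# Any token at or above vocab_size would trip the AssertionError in
# convert_hf_to_gguf.py's get_vocab_base().
offenders = {tok: idx for tok, idx in tokenizer.get_vocab().items() if idx >= vocab_size}
if offenders:
    print("tokens outside vocab_size (would break GGUF conversion):", offenders)
```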
Thanks a lot, @gabegoodhart!
cc: @pronics2004
ink-pad changed pull request status to merged