tokenizer.get_vocab() method is not working
Hi team,
There is a major issue with the tokenizer.get_vocab() method. get_vocab() is an important method that many codebases and libraries rely on, for example ctranslate2, so a failure here renders the model unusable for many use cases.
Below are my logs and a possible fix.
Thanks,
How to reproduce?
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/xgen-7b-8k-base", trust_remote_code=True)
tokenizer.get_vocab()
Possible Fix?
def _convert_id_to_token(self, index):
    """Converts an index (integer) to a token (str) using the vocab."""
    # return self.encoder.decode_single_token_bytes(index).decode("utf-8")  # current, fails
    return self.encoder.decode_single_token_bytes(index).decode("latin1")   # proposed fix

Use "latin1" here instead of "utf-8": an individual BPE token's bytes are not always valid UTF-8 on their own, but latin1 maps every byte value 0x00-0xFF to a code point, so the decode can never fail and is losslessly reversible.
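If the tokenizer also defines a _convert_token_to_id that encodes the token string back to bytes (I have not checked the rest of tokenization_xgen.py), it would presumably need the matching change so tokens round-trip; a hypothetical mirror of the fix above:

def _convert_token_to_id(self, token):
    """Converts a token (str) to an id using the vocab."""
    # encode with latin1 to exactly undo the latin1 decode above
    return self.encoder.encode_single_token(token.encode("latin1"))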
Error thrown?
UnicodeDecodeError Traceback (most recent call last)
Cell In[39], line 1
----> 1 tokenizer.get_vocab()["<|tiktoken|>"]
File ~/.cache/huggingface/modules/transformers_modules/Salesforce/xgen-7b-8k-base/3987e094377fae577bba039af1b300ee8086f9e1/tokenization_xgen.py:142, in XgenTokenizer.get_vocab(self)
140 def get_vocab(self):
141 """Returns vocab as a dict"""
--> 142 vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
143 return vocab
File ~/.cache/huggingface/modules/transformers_modules/Salesforce/xgen-7b-8k-base/3987e094377fae577bba039af1b300ee8086f9e1/tokenization_xgen.py:142, in <dictcomp>(.0)
140 def get_vocab(self):
141 """Returns vocab as a dict"""
--> 142 vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
143 return vocab
File ~/.cache/huggingface/modules/transformers_modules/Salesforce/xgen-7b-8k-base/3987e094377fae577bba039af1b300ee8086f9e1/tokenization_xgen.py:158, in XgenTokenizer._convert_id_to_token(self, index)
156 def _convert_id_to_token(self, index):
157 """Converts an index (integer) in a token (str) using the vocab."""
--> 158 return self.encoder.decode_single_token_bytes(index).decode("utf-8")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa1 in position 0: invalid start byte
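The failing byte from the traceback shows the problem in isolation; this standalone snippet (no tokenizer needed) demonstrates why latin1 is safe where utf-8 is not:

raw = b"\xa1"                         # a token's raw bytes; not valid UTF-8 on its own
# raw.decode("utf-8")                 # raises UnicodeDecodeError: invalid start byte
token = raw.decode("latin1")          # latin1 never fails: every byte maps to a code point
assert token.encode("latin1") == raw  # and the mapping round-trips losslessly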
Fixed, you can try again!
Thanks, get_vocab() is working now.
But when converting with ctranslate2 I still get an error saying that the model's vocab size and the tokenizer's vocab size do not match.
Can you take a look?
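In case it helps narrow this down, a minimal sketch to compare the sizes on the transformers side (assuming the model config exposes vocab_size, as most transformers configs do):

from transformers import AutoConfig, AutoTokenizer

model_id = "Salesforce/xgen-7b-8k-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)

# if these disagree, a converter that checks vocab sizes will reject the model
print("tokenizer.vocab_size:", tokenizer.vocab_size)
print("len(get_vocab()):    ", len(tokenizer.get_vocab()))
print("config.vocab_size:   ", config.vocab_size)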