kelechi committed
Commit cff5df0
1 Parent(s): 523fc48

added model card

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -32,8 +32,10 @@ For example, assuming we want to finetune this model on a token classification t
 
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForTokenClassification
->>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_small")
 >>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_small")
+>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_small")
+# we have to manually set the model max length because it is an imported sentencepiece model which huggingface does not properly support right now
+>>> tokenizer.model_max_length = 512
 ```
 
 #### Limitations and bias
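The added `tokenizer.model_max_length = 512` line matters because, when a tokenizer is imported from a raw SentencePiece model, transformers has no recorded maximum length and falls back to a huge sentinel value, so nothing is truncated and inputs longer than the model's position embeddings will fail at the model. A minimal sketch of that behavior in plain Python (no transformers dependency; `truncate` and the sentinel value are illustrative stand-ins, not the library's API):

```python
# transformers uses a very large integer as the default model_max_length
# when the real limit is unknown (illustrative value, not the exact constant)
VERY_LARGE_INTEGER = int(1e30)

def truncate(token_ids, model_max_length=VERY_LARGE_INTEGER):
    """Mimic tokenizer truncation: cap the sequence at model_max_length."""
    return token_ids[:model_max_length]

ids = list(range(600))                 # pretend 600-token input
print(len(truncate(ids)))              # 600: default cap never kicks in
print(len(truncate(ids, 512)))         # 512: capped, safe for the model
```

With the default, the 600-token sequence passes through untruncated; after setting the limit to 512, it is cut to a length the model can actually handle.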