End-of-chapter quiz
1. What is the order of the language modeling pipeline?
2. How many dimensions does the tensor output by the base Transformer model have, and what are they?
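If you're unsure, you can check for yourself. Here is a minimal sketch, assuming the same bert-base-cased checkpoint used in the code samples below, that loads a base model and prints the shape of the tensor it outputs:
from transformers import AutoTokenizer, AutoModel
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
inputs = tokenizer(["Hello!", "How are you?"], padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# The base model returns hidden states; inspect their shape
print(outputs.last_hidden_state.shape)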
3. Which of the following is an example of subword tokenization?
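As a reminder of what subword tokenization looks like in practice, here is a short sketch using BERT's WordPiece tokenizer; the example sentence is arbitrary:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# Words that are not in the vocabulary as a whole are split into smaller pieces;
# BERT's WordPiece marks continuation pieces with a leading "##"
print(tokenizer.tokenize("Transformers handle tokenization with subwords."))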
4. What is a model head?
5. What is an AutoModel?
6. Which techniques should you be aware of when batching sequences of different lengths together?
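To ground the question, here is a sketch of batching two sequences of different lengths with the tokenizer; the sentences and the bert-base-cased checkpoint are just examples:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
sequences = ["Hello!", "This second sentence is quite a bit longer than the first one."]
# Build a single rectangular batch of tensors from sequences of unequal length
batch = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
# Inspect what the tokenizer returned alongside the input IDs
print(batch["input_ids"].shape)
print(batch["attention_mask"])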
7. What is the point of applying a SoftMax function to the logits output by a sequence classification model?
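To see the effect for yourself, here is a minimal sketch; distilbert-base-uncased-finetuned-sst-2-english is just an example sequence classification checkpoint:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
inputs = tokenizer("I love this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Compare the raw logits with the same scores after SoftMax is applied
print(logits)
print(torch.nn.functional.softmax(logits, dim=-1))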
8. What method is most of the tokenizer API centered around?
9. What does the result variable contain in this code sample?
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
result = tokenizer.tokenize("Hello!")
10. Is there something wrong with the following code?
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("gpt2")
encoded = tokenizer("Hey!", return_tensors="pt")
result = model(**encoded)