Up to 3x faster LLM generation with no extra resources/requirements - ngram speculation has landed in 🤗 transformers! 🏎️💨
All you need to do is to add prompt_lookup_num_tokens=10 to your generate call, and you'll get faster LLMs 🔥
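For concreteness, here's a minimal sketch of that call. The checkpoint and prompt are placeholders (not the setup from the video); prompt_lookup_num_tokens itself is the real generate argument:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is just a stand-in checkpoint -- any causal LM works
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox jumps over the lazy dog. The quick", return_tensors="pt")

# The single extra argument that switches on ngram speculation
outputs = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```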
How does it work? 🤔
Start with assisted generation, where a smaller model generates candidate sequences. The net result is a significant speedup when the main model agrees with the candidate sequences! However, it does require a smaller model that was trained similarly 😕
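For reference, assisted generation is driven by the assistant_model argument to generate. A minimal sketch, with an illustrative model pairing (any main/draft pair sharing a tokenizer works):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative pairing: a main checkpoint plus a small draft model with the same tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
assistant = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Alice and Bob are playing a game of", return_tensors="pt")

# The assistant drafts candidate tokens; the main model checks them in a single forward pass
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```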
The idea, introduced (and implemented) by Apoorv Saxena, consists of gathering the candidate sequences from the input text itself: if the latest generated ngram also appears in the input, use the continuation found there as a candidate! No smaller model is required, and you still get significant speedups 🔥
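To make the trick concrete, here is a toy sketch of the lookup step. This is not the actual transformers implementation: the function name and defaults are invented for illustration.

```python
def prompt_lookup_candidates(token_ids, ngram_size=3, num_pred_tokens=10):
    """Toy sketch: if the trailing ngram of the sequence appeared earlier,
    return the tokens that followed it as a draft continuation."""
    ngram = token_ids[-ngram_size:]
    # Scan backwards so we find the most recent earlier occurrence first
    for start in range(len(token_ids) - ngram_size - 1, -1, -1):
        if token_ids[start:start + ngram_size] == ngram:
            continuation = token_ids[start + ngram_size:start + ngram_size + num_pred_tokens]
            if continuation:
                # Candidate tokens, to be verified by the model in one forward pass
                return continuation
    return []  # no match: fall back to regular decoding


# The trailing ngram [2, 4, 5] also appears earlier, so [6, 7, 8] is drafted
print(prompt_lookup_candidates([2, 4, 5, 6, 7, 8, 9, 2, 4, 5], num_pred_tokens=3))  # -> [6, 7, 8]
```

This also hints at where the speedups are largest: input-grounded tasks like summarization or code editing, where the output repeats chunks of the input.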
In fact, the overhead of gathering and testing the candidates is so small that you should use this technique whenever possible!
Here is the code example that produces the outputs shown in the video: https://pastebin.com/bms6XtR4
Have fun 🤗