---
language: en
widget:
- text: ' brown dog fox jumped lazy over quick the the '
datasets:
- 'stas/c4-en-10k'
---

# T5-deshuffle

Bag-of-words (BOW) is a simple and widely used encoding that lets statistical models discover patterns in language. However, BOW is a lossy compression that discards a very important feature of text: word order.
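
For illustration, here is a minimal sketch (not taken from this model's code) of how a BOW encoding drops order. Serializing the bag as alphabetically sorted tokens reproduces exactly the shuffled input format used throughout this card:

```python
from collections import Counter

sentence = "the quick brown fox jumped over the lazy dog"

# A bag of words keeps token counts but discards all positional information
bow = Counter(sentence.split())
print(bow)  # Counter({'the': 2, 'quick': 1, 'brown': 1, ...})

# Serializing the bag as sorted tokens yields the input format used below
print(" ".join(sorted(bow.elements())))  # brown dog fox jumped lazy over quick the the
```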

This model is trained to learn the most probable order of an unordered token sequence, using a subset of the c4 dataset, and can thus be seen as a "bag-of-words decoder".
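
A plausible way to build training pairs for this task (an assumption about the setup, not the actual preprocessing script) is to sort or shuffle each sentence's tokens for the input and keep the original sentence as the target:

```python
def make_deshuffle_pair(sentence: str) -> tuple[str, str]:
    """Hypothetical helper: build an (input, target) pair for deshuffling."""
    tokens = sentence.split()
    # Sorting matches the alphabetized inputs shown in this card;
    # random.shuffle(tokens) would be an alternative choice
    return " ".join(sorted(tokens)), sentence

src, tgt = make_deshuffle_pair("the quick brown fox jumped over the lazy dog")
# src: 'brown dog fox jumped lazy over quick the the'
# tgt: 'the quick brown fox jumped over the lazy dog'
```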

Currently, it does not perform well. I'm planning to retrain it on a larger subset of c4 later (after May).

## How to run

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("marksverdhei/t5-deshuffle")
model = T5ForConditionalGeneration.from_pretrained("marksverdhei/t5-deshuffle")

# Alphabetically sorted tokens of "the quick brown fox jumped over the lazy dog"
prompt = ' brown dog fox jumped lazy over quick the the '

# Encode the shuffled sequence and generate the model's predicted ordering
ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_tokens, = model.generate(ids)
print(tokenizer.decode(generated_tokens, skip_special_tokens=True))
```
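
By default, `generate` uses greedy decoding with a short length budget, so for longer inputs it may help to widen the search. The parameter values below are illustrative, not tuned:

```python
# Beam search with a larger length budget (illustrative settings, not tuned)
generated = model.generate(ids, max_length=32, num_beams=5, early_stopping=True)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```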