Warning during inference

#3
by bh4 - opened

I am getting the following warning while performing inference (using transformers), following the guide provided in the README:

The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.

Generation seems to be okay, but if passing the attention mask improves the results, I would certainly like to do so. How should I pass in the attention_mask?
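For reference, a minimal sketch of the usual pattern (the model ID below is a placeholder for whatever the README specifies): the tokenizer already returns an `attention_mask` alongside `input_ids`, so forwarding both to `generate` should make the warning go away. Setting `pad_token_id` explicitly also helps when the pad token equals the eos token.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-model-id-here"  # placeholder; use the model from the README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The tokenizer returns a BatchEncoding with both input_ids and attention_mask
inputs = tokenizer("Hello, how are you?", return_tensors="pt")

outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # pass the mask explicitly
    pad_token_id=tokenizer.eos_token_id,      # explicit pad id when pad == eos
    max_new_tokens=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Equivalently, `model.generate(**inputs, ...)` unpacks both tensors at once, since the tokenizer output already contains the mask.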
