bizarre results

#22
by vulcanoid - opened

So I ran flan-t5-large in Google Colab and also downloaded it and ran it locally just to make sure. I am getting bizarre results. For example:

pipeline('can clinton have a conversation with george washington? give the rationale how you came to the conclusion')
[{'generated_text': 'Hillary Clinton is a woman. George Washington was a man. The answer is no.'}]

pipeline('can bill clinton have a conversation with george washington? give the rationale how you came to the conclusion')
[{'generated_text': 'Bill Clinton was born in the United States. George Washington was born in the United Kingdom. The answer is no.'}]
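
For reference, here's roughly the setup behind those calls (a minimal sketch; the variable name, the text2text-generation task string, and the max_new_tokens value are assumptions, not the exact code I ran):

from transformers import pipeline

# Load FLAN-T5-Large as a text2text-generation pipeline.
pipe = pipeline("text2text-generation", model="google/flan-t5-large")

prompt = ("can bill clinton have a conversation with george washington? "
          "give the rationale how you came to the conclusion")
# max_new_tokens is an illustrative choice, not a tuned value.
print(pipe(prompt, max_new_tokens=64))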

Any insights into this?

Thanks
Vulcanoid

It's a tiny model with fewer than 1B parameters. Weak factual reasoning like this is expected at that scale.
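
If scale is the suspicion, a quick sanity check is to swap in a larger checkpoint (a sketch, not a guaranteed fix; google/flan-t5-xl is roughly 3B parameters and needs considerably more memory):

from transformers import pipeline

# Same pipeline, larger FLAN-T5 checkpoint (~3B params, much higher memory use).
pipe = pipeline("text2text-generation", model="google/flan-t5-xl")

prompt = ("can bill clinton have a conversation with george washington? "
          "give the rationale how you came to the conclusion")
print(pipe(prompt, max_new_tokens=64))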