Model sometimes responds in Chinese or Korean when prompted in English to translate to Japanese
The model sometimes prefers to respond in Chinese or Korean when prompted in English to translate to Japanese. It seems to happen at random with some inputs and not others; I'm not sure why. It is likely bad data in training causing some kind of bias, and I think this may be an issue with the base model itself, in this case Llama 2. When this doesn't happen, the translations seem fairly solid.
Here is an example. I have reproduced this both on the demo and on a version running locally on CPU with GGUF.
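For reference, a minimal sketch of what my local reproduction looks like, assuming llama-cpp-python; the model file name and prompt wording are placeholders, not the exact ones from the example above:

```python
# Sketch of the local CPU reproduction, assuming llama-cpp-python and a
# hypothetical GGUF file name.
from llama_cpp import Llama

# Load the quantized GGUF model on CPU (n_gpu_layers=0 keeps everything on CPU).
llm = Llama(model_path="model-q4_k_m.gguf", n_ctx=2048, n_gpu_layers=0)

prompt = 'Translate the following sentence into Japanese: "Good morning, how are you?"'
output = llm(prompt, max_tokens=128, temperature=0.7)

# Expected: a Japanese translation; observed (intermittently): Chinese or Korean text.
print(output["choices"][0]["text"])
```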
Sometimes it will correct itself if called out, but this doesn't always happen and is rather annoying.
It seems that speaking to it in Japanese first and having it output in Japanese helps, likely due to in-context learning, but even then it fails a lot. A sketch of that priming approach is below.
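This is roughly what I mean by priming, again assuming the same hypothetical llama-cpp-python setup; the example sentence pairs are made up:

```python
# Sketch of the Japanese-priming workaround: prepend a couple of
# English -> Japanese pairs so in-context learning biases the model
# toward producing Japanese rather than Chinese or Korean.
from llama_cpp import Llama

llm = Llama(model_path="model-q4_k_m.gguf", n_ctx=2048, n_gpu_layers=0)

primed_prompt = (
    "Translate the following sentences into Japanese.\n"
    "English: Good morning.\nJapanese: おはようございます。\n"
    "English: Thank you very much.\nJapanese: どうもありがとうございます。\n"
    "English: Where is the train station?\nJapanese:"
)

output = llm(primed_prompt, max_tokens=64, temperature=0.7)
print(output["choices"][0]["text"])
```

Even with this kind of prompt, the output still comes back in the wrong language often enough to be a problem.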
Thank you for your feedback.
When we release a new version, we will try to resolve this issue.