Getting bad results. How should I use this model?
I fired up the latest version of the llama.cpp server and pasted in the recommended system prompt. That didn't work at all; the model didn't even acknowledge my prompt. I then tried the "new UI", which allows a selectable prompt template, and selected "llama3". Now it at least tries to respond to my prompt, but it makes too many mistakes for a modern 70B.
How do you guys use it?
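For context, this is roughly what I tried; a minimal sketch where the model file name and port are placeholders, and the system prompt is whatever the model card recommends:

```
# Launch the server with the built-in Llama-3 chat template
# (model path is a placeholder for a local GGUF file)
./llama-server -m ./reflection-70b.Q4_K_M.gguf --chat-template llama3 --port 8080

# Send the recommended system prompt via the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [
    {"role": "system", "content": "PASTE THE RECOMMENDED SYSTEM PROMPT HERE"},
    {"role": "user", "content": "How many r are in strawberry?"}
  ]
}'
```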
Do you use llama-server, not "server"?
When I compile it, I get: NOTICE: The 'server' binary is deprecated. Please use 'llama-server' instead.
I tested it with the correct system prompt and the option --special, and I see the thinking, reflection, and output tags.
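In case it helps, this is roughly the invocation I mean (model path is a placeholder):

```
# --special makes the server render special tokens, so the
# thinking/reflection/output tags show up in the response
./llama-server -m ./reflection-70b.Q4_K_M.gguf --chat-template llama3 --special
```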
I can also see the model making use of the tags; it's just that the final answers are nowhere close to the top models in accuracy. I'm using b3680.
OK, I noticed the server answered the question "How many r in stawberrry? ..." wrong, while the normal CLI application got it right. But that could happen with a different seed. Do you have an example of bad-quality output? For German, I think the model is not as good as Llama-3.1-70B, but it's possible the Reflection model is based on Llama-3.
Have you checked settings like top-p (0.95) and temperature (0.7)?
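To take seed variance out of the picture, you can pin the sampler settings and the seed per request against the server's /completion endpoint; a rough sketch (prompt and values are just examples):

```
curl http://localhost:8080/completion -H "Content-Type: application/json" -d '{
  "prompt": "How many r are in strawberry?",
  "temperature": 0.7,
  "top_p": 0.95,
  "seed": 42,
  "n_predict": 256
}'
```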
They apparently also made another mistake in the weights... so TBD whether any of this testing matters.
Hey @bartowski.
You don't need to try anything; the model is fake. He lied on X: this model is a copy of LLaMA 3.1 70B, which has been confirmed with hashes. That's why it didn't work at all. You can watch this video to find out more about it: https://youtu.be/Xtr_Ll_A9ms?si=u-FDuumCn1Vbe16T
It's a detailed 20-minute video of how the fraud was verified, and the "owner" of Reflection will most probably get sued, if that helps calm your nerves after he wasted your time.
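If anyone wants to verify this kind of claim themselves, byte-identical weights show up as identical checksums; a sketch with hypothetical local paths:

```
# If the corresponding shards produce the same hashes,
# the two repos contain the same weights
sha256sum path/to/reflection-70b/*.safetensors \
          path/to/llama-3.1-70b/*.safetensors
```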
We hope this helps.