How to give feedback to teach/improve the model (and reap the results)?

#220
by abitrolly - opened

The https://huggingface.co./meta-llama/Llama-2-70b-chat-hf model that I've selected in HuggingChat is printing binary results literally.

[screenshot of the model's output]

Before that I told it that binary data is better represented as a hex dump, but it didn't take notice. So where is the right place to report or give feedback about such "mistakes"?

Then of course I am curious how such "mistakes" get fixed. What is the process for teaching the model to handle this situation better? For example, if I want it to respond to requests for binary data with annotated hex dumps, something like the sketch below. I imagine this is what "prompt engineering" is all about, but keeping a collection of prompts to copy/paste from in such situations doesn't seem like a convenient workflow to me.
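To be concrete, this is the kind of output format I mean. It's just an illustrative Python sketch of an annotated hex dump (offsets, hex bytes, ASCII column), not anything the model or HuggingChat produces:

```python
# Illustrative only: the format I'd like the model to use instead of
# echoing raw binary. Offset column, hex bytes, and a printable-ASCII view.
def hex_dump(data: bytes, width: int = 16) -> str:
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} |{ascii_part}|")
    return "\n".join(lines)

print(hex_dump(b"\x89PNG\r\n\x1a\nHello, world!"))
```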

Hugging Chat org

Hey, we currently don't support fine-tuning a model, for example. The best you can do is indeed prompt engineering; usually, adding a few examples to your prompt helps a lot. See few-shot prompting.
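For instance, here is a minimal sketch of a few-shot prompt. The question/answer pairs are made up for illustration; the pattern of showing a couple of worked examples before the real question is what matters:

```python
# Hypothetical few-shot prompt: two worked examples teach the model the
# desired hex-dump format, then the real request follows.
few_shot_prompt = """\
Q: Show the bytes 0x48 0x69 as an annotated hex dump.
A: 00000000  48 69  |Hi|

Q: Show the bytes 0xde 0xad 0xbe 0xef as an annotated hex dump.
A: 00000000  de ad be ef  |....|

Q: Show the first 8 bytes of a PNG file as an annotated hex dump.
A:"""
```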

But tbh I'm not sure the kind of workflow you're showing in your example is best done with an LLM anyway 🤔 Models do tend to hallucinate.

I hope this helped a bit!

nsarrazin changed discussion status to closed

@nsarrazin do few-shot examples count against the 2000-token limit?

Hugging Chat org

Yes, that's one of the downsides. Depending on what you want to do it might not leave enough room for context, but for a lot of use cases it's usually enough.
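If you want to check how much room your examples take up, here is a rough sketch, assuming you have access to the gated Llama 2 tokenizer on the Hub (any Llama-family tokenizer gives a close estimate):

```python
# Estimate how much of the 2000-token budget a few-shot prompt consumes.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-chat-hf")

prompt = "...your few-shot examples plus the actual question..."  # placeholder
n_tokens = len(tokenizer.encode(prompt))
print(f"{n_tokens} tokens used, {2000 - n_tokens} left for everything else")
```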
