
Hello, why not merge the model with your adapter model?

#3
by notbdq - opened

Your models are not the actual model; they are adapter models, containing only the weights of the fine-tuned part of the base model. You can't use them directly because an adapter is only part of a model; you have to merge the base model with your adapter weights. I had the same issue, and I can help you merge your adapter weights with the base model. You can check the model on my profile, a Mistral-7B-v0.2 that I fine-tuned myself with these steps: fine-tuned with QLoRA, merged the adapter weights with the base model, and quantized it to GGUF. Now it can answer my questions in English and Turkish, as you can see:
[Screenshot: the merged model answering questions in English and Turkish]
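
For anyone hitting the same issue, merging a PEFT adapter into its base model can look like the sketch below. The model and adapter IDs are placeholders; substitute the actual base model and adapter repos:

```python
# Minimal sketch: fold a LoRA/QLoRA adapter into its base model with peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"      # placeholder base model
adapter_id = "your-username/your-adapter"  # placeholder adapter repo

# Load the base model in half precision (merge into unquantized weights).
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter, then fold its weights into the base layers.
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()

# Save the merged, standalone model; it no longer needs the peft library.
merged.save_pretrained("merged-model")
tokenizer.save_pretrained("merged-model")
```

The saved folder is a standalone model; if you want a quantized file, you can then convert it to GGUF with llama.cpp's conversion script.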

MonsterAPI org

Hello! We have a new service that lets you deploy adapters along with base models. We don't merge them; the adapter is applied dynamically at runtime. This also keeps the weight files from bloating.
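
As a rough illustration of the idea (this is not MonsterAPI's actual serving code, and the model and adapter IDs are placeholders), serving adapters without merging means keeping one copy of the base weights in memory and attaching the small adapter weights at runtime:

```python
# Sketch of runtime adapter loading: one shared base model, adapters
# attached on demand. Illustration only; not MonsterAPI's implementation.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# Only the adapter weights are loaded here; the multi-GB base weights
# stay shared rather than being duplicated into a merged checkpoint.
model = PeftModel.from_pretrained(base, "your-username/your-adapter")

# Swap in another fine-tune without reloading the base model.
model.load_adapter("your-username/another-adapter", adapter_name="other")
model.set_adapter("other")
```

Since a LoRA adapter is typically a few hundred megabytes at most, many fine-tunes can share a single multi-gigabyte base model this way.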

Check out our service here: https://developer.monsterapi.ai/docs/monster-deploy-beta and apply for the beta there; we provide free credits for people to try it out and give us feedback. Let us know if we can help you get started with this tool.

notbdq changed discussion status to closed
