Hello, why not merge the base model with your adapter model?
Your models are not the actual model; they are adapter models, meaning they contain only the weights of the fine-tuned part of the full model. You can't use them directly because they are just a piece of the model. You have to merge the base model with your adapter weights. I had the same issue, and I can help you merge your adapter weights with the base model. You can check the model on my profile, a Mistral-7B-v0.2 that I fine-tuned with these steps: fine-tuned using QLoRA, merged the adapter weights with the base model, and quantized it to GGUF. Now it can answer my questions in English and Turkish, as you can see;
Hello, we have a new service that lets you deploy adapters along with base models. We don't merge them; the adapters are applied dynamically at runtime, which also keeps the weight files from bloating.
Check out our service here: https://developer.monsterapi.ai/docs/monster-deploy-beta and apply for the beta; we provide free credits for people to try it out and give us feedback. Let us know if we can help with using this tool.