Combine fine-tuned model
#4
by Bloodofthedragon - opened
I want to fine-tune Alpaca 30B on my custom dataset, starting from your fine-tuned adapter. I tried this:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# load the base model in 8-bit, then attach your existing LoRA adapter
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    device_map=device_map,
)
model = PeftModel.from_pretrained(model, "chansung/alpaca-lora-30b")
tokenizer = LlamaTokenizer.from_pretrained(base_model)
```
My script is just finetune.py from alpaca-lora, with that PEFT addition.
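For context, this is roughly what the script then does with the model (a simplified sketch of the stock tloen/alpaca-lora code; the LoraConfig values shown are just the script's defaults):

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# freeze the base weights and prepare the 8-bit model for training
model = prepare_model_for_int8_training(model)

# wrap the model in a *new*, randomly initialized LoRA adapter
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
```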
Training runs, but during inference the result doesn't seem to retain any of your fine-tuning. Can you offer any help? Thanks for your work.
How did you determine that my fine-tuning is not present?
When I run inference with just your fine-tuning, the model counts to 10 when prompted. When I run it with my fine-tuning added, it hallucinates.
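A plausible explanation, assuming the stock finetune.py flow sketched above: get_peft_model wraps the model in a second, freshly initialized adapter, while the weights loaded from chansung/alpaca-lora-30b stay frozen (PeftModel.from_pretrained defaults to is_trainable=False). The adapter saved after training then contains only the new LoRA weights, so loading it alone at inference drops the original fine-tuning. One possible fix is to skip get_peft_model and continue training the loaded adapter directly; a minimal, untested sketch:

```python
from transformers import LlamaForCausalLM
from peft import PeftModel, prepare_model_for_int8_training

model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    device_map=device_map,
)
model = prepare_model_for_int8_training(model)

# load the existing adapter with its weights unfrozen, instead of
# creating a fresh adapter with get_peft_model()
model = PeftModel.from_pretrained(
    model,
    "chansung/alpaca-lora-30b",
    is_trainable=True,
)
model.print_trainable_parameters()  # sanity check: LoRA params should be trainable
```

Alternatively, the existing adapter could be merged into the base weights with merge_and_unload() and the merged checkpoint used as finetune.py's base model; that would require loading the base in fp16 rather than 8-bit, since merging into 8-bit quantized layers generally isn't possible.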