## Training details

- Datasets used: explanation-style datasets from psmathur/WizardLM_Orca and Dahoas/cot_gsm8k
- Techniques: fp16 precision training + LoRA + DeepSpeed (a minimal sketch follows this list)
- Machine: 2 × V100 (16 GB)
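
The training code and exact configuration live in the linked Funtuner repo; the snippet below is only a minimal sketch of how fp16 training, LoRA, and DeepSpeed typically fit together with `peft` and the `transformers` Trainer. The base checkpoint name, LoRA rank/alpha, target modules, and the `ds_config.json` path are illustrative assumptions, not the values actually used.

```python
# Minimal sketch: LoRA adapters + fp16 + DeepSpeed via the Hugging Face Trainer.
# All hyperparameters below are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM, TrainingArguments

base_model = LlamaForCausalLM.from_pretrained("openlm-research/open_llama_7b")  # assumed base checkpoint

# Wrap the base model so only the low-rank adapter matrices are trained.
lora_config = LoraConfig(
    r=16,                                # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# fp16 training across the two V100s with a DeepSpeed config file.
training_args = TrainingArguments(
    output_dir="outputs",
    fp16=True,
    deepspeed="ds_config.json",          # assumed path to the DeepSpeed config
    per_device_train_batch_size=4,
)
```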
## Inference

```python
from peft import PeftModel
from huggingface_hub import hf_hub_download
from transformers import LlamaTokenizer, LlamaForCausalLM
import json

model_name = "shahules786/open-llama-7B-orcastyle"

# Download the adapter config to find the base model the LoRA weights were trained on.
config_path = hf_hub_download(repo_id=model_name, filename="adapter_config.json", local_dir=".")
config = json.load(open(config_path))
base_model = config["base_model_name_or_path"]

tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(base_model)
model.resize_token_embeddings(len(tokenizer))

# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, model_name).eval()
tokenizer.padding_side = "left"

inputs = tokenizer("This is a sample run", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
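
Because `padding_side` is set to `"left"`, the model can also be used for batched generation. Continuing from the snippet above, here is a minimal sketch; the pad-token fallback, prompts, and `max_new_tokens` value are illustrative assumptions.

```python
# Left padding keeps the end of every prompt adjacent to the generated tokens.
# Falling back to the EOS token as pad token is an assumption; adjust if the
# tokenizer already defines a dedicated pad token.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

prompts = [
    "Explain why the sky appears blue.",
    "Summarise the water cycle in two sentences.",
]
batch = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(**batch, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```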
Check out the training and inference code [here](https://github.com/explodinggradients/Funtuner/tree/main/funtuner).