---
license: mit
language:
- en
tags:
- IFT
---
# **Introduction**
This model originates from LLaMA-2-7B. We fine-tuned it on the Alpaca-GPT-4 dataset using LoRA (Low-Rank Adaptation), computing the loss only on the response part of each example. The LoRA weights have been merged back into the base model.
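
The sketch below illustrates this recipe with the standard `transformers`/`peft` APIs: LoRA adapters attached to a LLaMA-2-7B base, prompt tokens masked out of the loss so only the response is trained, and a final merge of the adapters into the base weights. The base checkpoint id, LoRA hyperparameters, and prompt template are illustrative assumptions, not the exact values used for this model.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # assumed Hub id for the LLaMA-2-7B base
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# Attach LoRA adapters; rank, alpha, and target modules are illustrative.
lora = LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora)

dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")

def to_features(example):
    # Assumed Alpaca-style template; the exact prompt format used for
    # this model is not documented in the card.
    prompt = f"### Instruction:\n{example['instruction']}\n\n### Response:\n"
    ids = tokenizer(prompt + example["output"], truncation=True, max_length=512)["input_ids"]
    prompt_len = min(len(tokenizer(prompt)["input_ids"]), len(ids))
    # -100 masks the prompt tokens out of the loss, so only the
    # response part contributes to training.
    labels = [-100] * prompt_len + ids[prompt_len:]
    return {"input_ids": ids, "labels": labels}

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

# ... run a standard Trainer loop on `tokenized`, then fold the adapters
# into the base weights so a single merged model can be published:
merged = model.merge_and_unload()
```
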
## Details
### Datasets Used
- vicgalle/alpaca-gpt4
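
Because the LoRA weights are already merged, the model loads like any standalone causal LM. A minimal inference sketch follows; the repository id is a placeholder, and the Alpaca-style prompt is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "your-org/llama2-7b-alpaca-gpt4"  # placeholder for this repo's Hub id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Assumed Alpaca-style prompt format.
prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated response tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```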