Instruction-tuned LLaMA (Alpaca-GPT4)
LLaMA-7B fine-tuned on the Alpaca instruction-following dataset.
The training scripts come from the stanford-alpaca repo, while the instruction data comes from the GPT-4-LLM repo (Alpaca prompts with GPT-4-generated responses); training uses stanford-alpaca's default hyper-parameters.
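As a rough sketch of what "default hyper-parameters" means here, the stanford-alpaca README launches fine-tuning with a `torchrun` command along these lines. The model, data, and output paths below are placeholders, and the exact flag values should be checked against the stanford-alpaca repo before use:

```shell
# Sketch of the stanford-alpaca fine-tuning invocation with its documented
# default hyper-parameters (3 epochs, lr 2e-5, cosine schedule, FSDP).
# Paths are placeholders; verify flags against the stanford-alpaca README.
torchrun --nproc_per_node=4 --master_port=29500 train.py \
    --model_name_or_path /path/to/llama-7b \
    --data_path ./alpaca_gpt4_data.json \
    --bf16 True \
    --output_dir ./output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type cosine \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap LlamaDecoderLayer
```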
Please refer to this page for more details.
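When prompting the resulting model, inputs should follow the same template the Alpaca training data uses. A minimal sketch of that formatting, mirroring the template strings in the stanford-alpaca training code:

```python
# Alpaca prompt templates, as used in the stanford-alpaca training code.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)


def build_prompt(instruction: str, input: str = "") -> str:
    """Format one example the way the Alpaca fine-tuning data is formatted."""
    if input:
        return PROMPT_WITH_INPUT.format(instruction=instruction, input=input)
    return PROMPT_NO_INPUT.format(instruction=instruction)


print(build_prompt("Name three primary colors."))
```

The model generates its answer after the trailing `### Response:` marker, so generation should stop at the end-of-sequence token rather than at a fixed length.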