LoRA Info:

Please note that this is a highly experimental LoRA model. It may produce good results, and it may produce undesirable ones. Training is now complete. Feel free to try it!

Important note: although this model was trained on a cleaned ShareGPT dataset (the same kind of data Vicuna used), it was trained with the Alpaca prompt format, so prompts should look like:

### Instruction:
<prompt> (without the <>)

### Response:
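
As a quick illustration, here is a minimal sketch of building that prompt string in Python; the trailing newline after "### Response:" is an assumption based on the common Alpaca convention:

```python
# Minimal sketch: wrap a user instruction in the Alpaca template described above.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```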

Current upload: Fully trained adapter model (3 epochs).
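
For reference, below is a hypothetical loading sketch using transformers + peft. The base model ID ("huggyllama/llama-30b") is an assumption; substitute whichever LLaMA 30B weights the adapter should sit on top of.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the (assumed) base model in fp16, sharded across available devices.
base = LlamaForCausalLM.from_pretrained(
    "huggyllama/llama-30b",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-30b")

# Stack the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "Aeala/VicUnlocked-alpaca-half-30b-LoRA")

prompt = "### Instruction:\nWrite a haiku about spring.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```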

Secondary upload: checkpoint from epoch 2.97 (of 3).

Thanks to MetaIX for early, seemingly successful testing of the first uploaded checkpoint (epoch 0.8), as well as of epoch 1.

Benchmarks

Perplexity (lower is better), generated with the GPTQ eval scripts on the unquantized model, thanks to Neko-Institute-of-Science:

wikitext2: 4.372413635253906

ptb-new: 24.69171714782715

c4-new: 6.469308853149414
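
For context, here is a rough sketch of how such perplexity numbers are typically computed: a simple non-overlapping-window evaluation over wikitext-2. The window size and dataset split are assumptions, and the actual GPTQ eval scripts may differ in detail.

```python
import torch
from datasets import load_dataset

@torch.no_grad()
def wikitext2_perplexity(model, tokenizer, window=2048):
    """Score a causal LM over the wikitext-2 test split in fixed windows."""
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids
    ids = ids.to(model.device)
    total_nll, total_tokens = 0.0, 0
    # out.loss is the mean next-token NLL over the window, so re-weight
    # each window by its token count before averaging across the corpus.
    for start in range(0, ids.size(1), window):
        chunk = ids[:, start : start + window]
        if chunk.size(1) < 2:
            break
        out = model(chunk, labels=chunk)  # HF shifts labels internally
        n = chunk.size(1) - 1
        total_nll += out.loss.item() * n
        total_tokens += n
    return float(torch.exp(torch.tensor(total_nll / total_tokens)))
```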

