Base model: https://huggingface.co./WizardLM/WizardLM-13B-V1.2
Trained on the following dataset: https://huggingface.co./datasets/gmongaras/reddit_negative
Fine-tuned for about 600 steps with a batch size of 6 and 3 gradient accumulation steps (an effective batch size of 18), using LoRA adapters on all layers.
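A minimal sketch of this training setup, not the exact script used. The hyperparameters above (600 steps, batch size 6, 3 accumulation steps) come from this card; the LoRA rank, alpha, learning rate, sequence length, and target-module names are assumptions for a LLaMA-style model such as WizardLM-13B, and the dataset's `text` column name is also assumed.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "WizardLM/WizardLM-13B-V1.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# LoRA on all attention and MLP projections ("all layers" for a LLaMA-style model).
lora_config = LoraConfig(
    r=16,                 # assumed rank
    lora_alpha=32,        # assumed scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

dataset = load_dataset("gmongaras/reddit_negative", split="train")

def tokenize(batch):
    # "text" is an assumed column name in the dataset.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="wizardlm-13b-reddit-negative",
    per_device_train_batch_size=6,
    gradient_accumulation_steps=3,   # effective batch size of 18
    max_steps=600,
    learning_rate=2e-4,              # assumed
    fp16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```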
This model is not currently available via any of the supported Inference Providers.
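Since no Inference Provider serves the model, it can be run locally with `transformers`. A minimal sketch, where the model id is a placeholder for this repository's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # placeholder: replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "How was your day?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```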