---
license: openrail
---
Model from: https://huggingface.co./TheBloke/wizardLM-7B-HF/tree/main
Trained on: https://huggingface.co./datasets/gmongaras/reddit_political_2019
Fine-tuned for about 6,000 steps with a batch size of 8, 2 gradient accumulation steps, and LoRA adapters on all layers.
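
The setup above roughly corresponds to the PEFT/transformers sketch below. This is a minimal illustration, not the exact training script: the LoRA rank/alpha/dropout, learning rate, sequence length, and the dataset's `text` field name are assumptions; only the base model, dataset, batch size, accumulation steps, step count, and all-layer LoRA targeting come from the description above.

```python
# Sketch of the described fine-tune: LoRA on all LLaMA projection layers,
# batch size 8, 2 gradient accumulation steps, ~6000 steps.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "TheBloke/wizardLM-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# "All layers": target every attention and MLP projection in the LLaMA blocks.
lora = LoraConfig(
    r=8,                # assumed rank
    lora_alpha=16,      # assumed scaling
    lora_dropout=0.05,  # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

data = load_dataset("gmongaras/reddit_political_2019", split="train")

def tokenize(batch):
    # Field name and max length are assumptions for illustration.
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=8,  # batch size 8
        gradient_accumulation_steps=2,  # 2 accumulation steps
        max_steps=6000,                 # ~6000 steps
        learning_rate=2e-4,             # assumed
        fp16=True,
        logging_steps=50,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```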