---
license: apache-2.0
base_model: amazingvince/Yoda-WizardLM-2.3-7B
library_name: transformers
model-index:
- name: Yoda-WizardLM-2.3-7B
results: []
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- trl
- orpo
- generated_from_trainer
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# amazingvince/Yoda-WizardLM-2.3-7B AWQ
- Model creator: [amazingvince](https://huggingface.co./amazingvince)
- Original model: [Yoda-WizardLM-2.3-7B](https://huggingface.co./amazingvince/Yoda-WizardLM-2.3-7B)
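## How to Use

A minimal loading sketch with `transformers` (AWQ checkpoints require the `autoawq` package to be installed). The repo id below is a placeholder; replace it with this repository's actual id.

```python
# Hedged example: the repo id is a placeholder, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Yoda-WizardLM-2.3-7B-AWQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the 4-bit AWQ weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain AWQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```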
## Model Summary
This model is a fine-tuned version of [amazingvince/Not-WizardLM-2-7B](https://huggingface.co./amazingvince/Not-WizardLM-2-7B) on an unknown dataset. This repository provides a 4-bit AWQ-quantized version of that model, quantized by Suparious.
## Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
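For reference, here is a hedged sketch of how these hyperparameters might map onto `trl`'s `ORPOConfig` (the `trl`/`orpo` tags suggest ORPO training; the dataset and training script are not documented here, so the output path is illustrative):

```python
# Hedged reconstruction of the training arguments listed above using trl.
from trl import ORPOConfig

training_args = ORPOConfig(
    output_dir="yoda-wizardlm-orpo",   # hypothetical output directory
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,     # effective train batch size: 1 x 8 = 8
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    optim="adamw_torch",               # Adam with betas=(0.9, 0.999), eps=1e-8
)
```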