winglian committed
Commit 603fdcc · 1 Parent(s): 0eda887

Update README.md

Files changed (1): README.md (+4, −2)
README.md CHANGED

```diff
@@ -7,6 +7,8 @@ base_model: DiscoResearch/mixtral-7b-8expert
 model-index:
 - name: qlora-out
   results: []
+datasets:
+- tatsu-lab/alpaca
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 # qlora-out
 
-This model is a fine-tuned version of [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) on the None dataset.
+This model is a qLoRA fine-tuned version of [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) on the tatsu-lab/alpaca dataset.
 
 ## Model description
 
@@ -76,4 +78,4 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
-- PEFT 0.6.0
+- PEFT 0.6.0
```
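
The updated card describes `qlora-out` as a qLoRA adapter (PEFT 0.6.0, trained with Axolotl and a `bitsandbytes` quantization config) on top of DiscoResearch/mixtral-7b-8expert using the tatsu-lab/alpaca dataset. A minimal loading sketch under those assumptions follows; the local adapter path `./qlora-out`, the 4-bit NF4 settings, and the Alpaca-style prompt are illustrative, not values taken from this commit.

```python
# Sketch: load the base model in 4-bit and attach the qLoRA adapter with PEFT.
# Assumptions (not from this commit): adapter weights live in ./qlora-out and a
# typical qLoRA bitsandbytes setup is used (4-bit NF4, double quant, bf16 compute).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "DiscoResearch/mixtral-7b-8expert"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # the base repo ships custom Mixtral modeling code
)

# Attach the adapter produced by this training run (path is illustrative).
model = PeftModel.from_pretrained(base_model, "./qlora-out")
model.eval()

# Alpaca-style prompt, assumed from the tatsu-lab/alpaca dataset format.
prompt = "### Instruction:\nExplain what a mixture-of-experts model is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading the base model in 4-bit mirrors the qLoRA training setup and keeps memory requirements modest; for full-precision inference one could instead reload the base model without quantization before attaching the adapter.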