anishareddyalla committed on
Commit a804ce9
1 Parent(s): 34e776a

mistral-aws-case-studies-blogs

README.md CHANGED
@@ -3,6 +3,7 @@ base_model: mistralai/Mistral-7B-Instruct-v0.1
 datasets:
 - generator
 library_name: peft
+license: apache-2.0
 tags:
 - trl
 - sft
@@ -19,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.6532
+- Loss: 1.6694
 
 ## Model description
 
@@ -50,11 +51,11 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.9048        | 0.0990 | 20   | 1.8104          |
-| 1.7846        | 0.1980 | 40   | 1.7270          |
-| 1.6624        | 0.2970 | 60   | 1.6899          |
-| 1.6928        | 0.3960 | 80   | 1.6672          |
-| 1.6873        | 0.4950 | 100  | 1.6532          |
+| 1.9139        | 0.0985 | 20   | 1.8390          |
+| 1.7019        | 0.1970 | 40   | 1.7463          |
+| 1.7298        | 0.2956 | 60   | 1.7098          |
+| 1.6856        | 0.3941 | 80   | 1.6848          |
+| 1.6524        | 0.4926 | 100  | 1.6694          |
 
 
 ### Framework versions
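
For context, a minimal sketch of how the PEFT adapter described in this README could be loaded on top of the base model and used for generation. The adapter repo id below is an assumption for illustration; only the base model and the PEFT library are confirmed by the diff.

```python
# Minimal sketch: load the LoRA adapter on top of Mistral-7B-Instruct-v0.1.
# adapter_id is assumed from the commit message; substitute the actual repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "anishareddyalla/mistral-aws-case-studies-blogs"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] Summarize a customer case study about migrating to AWS. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```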
adapter_config.json CHANGED
@@ -20,8 +20,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "v_proj",
-    "q_proj"
+    "q_proj",
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e44ce263e6fd885f50d82ca515b9325375b43ee36ededb75acf161ce88bc2e41
-size 48
+oid sha256:b2206a79359498a34f06c2a08c93bd00f74437c1680930d88102cbdccffd2370
+size 109069176
runs/Jul18_21-56-20_17a788b34d8a/events.out.tfevents.1721339783.17a788b34d8a.6233.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8c8ab72370364c71dda30eb23582670892e0beb15ee2ce6cc915d51c61a85807
+size 9385
runs/Jul18_22-17-19_17a788b34d8a/events.out.tfevents.1721341048.17a788b34d8a.10441.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e14e3d136a9e60c76066786e5ddd911bdcf74b987e7ea3fe3f231f566eee8be7
+size 9385
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b141a4d886c46af2e1555636aaf90986ae8a74aae8bb133f1c1ffa40ec308a30
+oid sha256:d0c37594192e678f2e87d38203fd26e7ab54803b81c19820790c4e2c12e40718
 size 5368