pszemraj committed
Commit eb06f7b
1 Parent(s): b0e1999

add readme

Files changed (1): README.md (+63, -1)
README.md CHANGED
@@ -1,3 +1,65 @@
 ---
-license: cc-by-nc-4.0
+license:
+- apache-2.0
+- cc-by-nc-4.0
+datasets: pszemraj/fleece2instructions-codealpaca
+tags:
+- generated_from_trainer
+- instruct
+- instructions
+- code
+metrics:
+- rouge
+language:
+- en
 ---

# bart-large-code-instructiongen

Use this text2text model to find out what LLM instructions might be able to generate an arbitrary piece of code!
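
For example, a minimal usage sketch with 🤗 `transformers` (the Hub model ID is assumed from the card title and committer, and the generation settings are illustrative rather than taken from the card):

```python
# minimal usage sketch -- the model ID is assumed from the card title/committer,
# and the generation settings are illustrative
from transformers import pipeline

instruction_writer = pipeline(
    "text2text-generation",
    model="pszemraj/bart-large-code-instructiongen",
)

code = """
def filter_even(numbers):
    return [n for n in numbers if n % 2 == 0]
"""

result = instruction_writer(code, max_length=96, num_beams=4)
print(result[0]["generated_text"])
# e.g. something like: "Write a Python function that returns only the even
# numbers from a list." (output will vary)
```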

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the `pszemraj/fleece2instructions-codealpaca` dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9222
- Rouge1: 62.0692
- Rouge2: 36.1947
- Rougel: 57.5128
- Rougelsum: 58.6613
- Gen Len: 31.0060
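
The ROUGE values appear to be F-measure scores scaled to 0-100. A rough sketch of recomputing such scores with the `evaluate` library (the example strings are placeholders; real evaluation would decode model outputs over the validation split):

```python
# rough sketch of recomputing ROUGE for generated instructions; the example
# strings are placeholders, not actual model outputs
import evaluate

rouge = evaluate.load("rouge")

predictions = ["Write a Python function that returns the sum of two numbers."]
references = ["Create a function that adds two numbers and returns the result."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# evaluate returns fractions in [0, 1]; the card reports them scaled by 100
print({k: round(v * 100, 4) for k, v in scores.items()})
```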

## Intended uses & limitations

🚨 **Note:** the authors released the [original dataset](https://github.com/sahil280114/codealpaca) under `cc-by-nc`, so that license carries over to this model, which **cannot be used for commercial activity**.

Intended use: research on domain adaptation and/or other improvements to LLMs by extending instruction:text data pairs, for example by generating new instruction:code pairs from existing code (see the sketch below).
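
A hypothetical sketch of that workflow, pairing existing code snippets with generated instructions to extend an instruction-tuning dataset (the model ID is assumed and the field names and snippets are illustrative):

```python
# hypothetical sketch: build {instruction, output} records from existing code
# snippets; the model ID is assumed and the snippets are placeholders
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="pszemraj/bart-large-code-instructiongen",
)

code_snippets = [
    "SELECT name, email FROM users WHERE active = 1;",
    "const squares = numbers.map((n) => n * n);",
]

pairs = []
for snippet in code_snippets:
    instruction = generator(snippet, max_length=96, num_beams=4)[0]["generated_text"]
    pairs.append({"instruction": instruction, "output": snippet})

print(pairs)
```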

## Training and evaluation data

See the [pszemraj/fleece2instructions-codealpaca](https://huggingface.co/datasets/pszemraj/fleece2instructions-codealpaca) dataset card for details.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
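
A hedged reconstruction of these settings as `Seq2SeqTrainingArguments`; anything not listed above (e.g. `output_dir`, `predict_with_generate`) is an assumption:

```python
# reconstruction of the listed hyperparameters; unlisted arguments are assumed
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./bart-large-code-instructiongen",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size of 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,      # assumed, needed for ROUGE during eval
)
```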

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.0914        | 1.0   | 563  | 1.0303          | 60.288  | 34.1884 | 55.9293 | 57.0714   | 30.6267 |
| 0.8688        | 2.0   | 1126 | 0.9333          | 61.0409 | 34.9823 | 56.4887 | 57.6662   | 31.7255 |
| 0.6773        | 3.0   | 1689 | 0.9222          | 62.0692 | 36.1947 | 57.5128 | 58.6613   | 31.0060 |