Update README.md
README.md CHANGED
@@ -99,7 +99,7 @@ Embedding layers and Linear layers of attention module are randomly initialized
 We use the Python split of the [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) dataset to finetune our 135m pretrained model on 20B training tokens. The full StarCoder dataset contains 783GB of code in 86 programming languages and includes GitHub Issues, Jupyter notebooks, and GitHub commits, approximately 250 billion tokens in total. We extract the Python split alone to finetune our 135m pretrained model.
 
 ### Code Finetuning Detail
-We take the 135m pretrained model as base model and further finetune on python split of StarCoder datasets for
+We take the 135m pretrained model as the base model and further finetune it on the Python split of StarCoder for 1 epoch with a batch size of 320.
 
 | Finetuning config | value |
 | ---------------------- | ------ |
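
For reference, the Python split mentioned in the diff can be loaded from the Hugging Face Hub. A minimal sketch, assuming the `datasets` library and access to the gated `bigcode/starcoderdata` repository (the `content` field name follows the dataset card):

```python
from datasets import load_dataset

# Stream the Python split instead of downloading the full 783GB dataset;
# data_dir="python" selects the per-language subset on the Hub.
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",
    split="train",
    streaming=True,
)

# Each record stores the source file text in the "content" field.
print(next(iter(ds))["content"][:200])
```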
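
The new line in the diff pins the schedule to 1 epoch with a batch size of 320. A hypothetical sketch of that setup using `transformers.TrainingArguments`; the output path and the per-device/accumulation split of the 320 batch are illustrative assumptions, not values from this commit:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ckpts/135m-python-ft",  # hypothetical path
    num_train_epochs=1,                 # stated: finetune for 1 epoch
    per_device_train_batch_size=40,     # assumed split: 40 x 8 devices = 320
    gradient_accumulation_steps=1,      # effective batch size stays 320
)
```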