ZPaimhigh committed
Commit 9a224a0
1 Parent(s): 81a8ba2

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -99,7 +99,7 @@ Embedding layers and Linear layers of attention module are randomly initialized
 We use python split of [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) dataset to finetune our 135m pretrained model, 20B training tokens. Originally, StarCoder contains 783GB of code in 86 programming languages and includes GitHub Issues, Jupyter notebooks and GitHub commits, which is approximately 250 Billion tokens. We extract the python split of StarCoder to finetune our 135m pretrained model.
 
 ### Code Finetuning Detail
-We take the 135m pretrained model as base model and further finetune on python split of StarCoder datasets for 2 epoch with batch size of 320.
+We take the 135m pretrained model as base model and further finetune on python split of StarCoder datasets for 1 epoch with batch size of 320.
 
 | Finetuning config | value |
 | ---------------------- | ------ |
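For context, here is a minimal sketch, assuming the `datasets`/`transformers` stack, of the setup the changed line describes. The `data_dir="python"` argument follows the bigcode/starcoderdata dataset card; the per-device batch size, accumulation steps, device count, and output path are illustrative assumptions, since the commit only states the epoch count (1) and the effective batch size (320).

```python
from datasets import load_dataset
from transformers import TrainingArguments

# Pull the python split of StarCoder, following the bigcode/starcoderdata
# dataset card. The split is large, so streaming=True is handy for a quick
# look; a real finetuning run would materialize the data.
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",
    split="train",
    streaming=True,
)
print(next(iter(ds))["content"][:200])  # peek at one source file

# Finetuning config per this commit: 1 epoch at an effective batch size of 320.
# The factorization below (8 GPUs x 8 per device x 5 accumulation steps = 320)
# is an assumption; only the epoch count and effective batch size are stated.
args = TrainingArguments(
    output_dir="llama-135m-code-finetune",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=5,
)
```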