How is the coding performance?

#1 opened by rombodawg

How does the coding performance compare to GPT-3.5 (i.e., ChatGPT)? Is it far below that? Also, can you share what coding data you trained the model on?

It is lower than GPT-3.5 in terms of coding. Llama 2 was likely trained on very little code data. Also, all data can be found at https://huggingface.co./datasets/openchat/openchat_sharegpt_v3


The link is not working

OpenChat org

Sorry, typo, fixed now
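
For anyone who wants to pull that data, here is a minimal sketch using the Hugging Face `datasets` library. The dataset id is taken from the link above; the split and field names are assumptions, so check the dataset card for the actual schema.

```python
# Minimal sketch: load the dataset referenced above with the Hugging Face
# `datasets` library. Split names and record fields are assumptions --
# inspect the printed output / dataset card for the real schema.
from datasets import load_dataset

ds = load_dataset("openchat/openchat_sharegpt_v3")

print(ds)               # DatasetDict: shows available splits and column names
split = next(iter(ds))  # first split name, e.g. "train" if present
print(ds[split][0])     # one raw conversation record
```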
