LLM for ARC
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the barc0/transduction_heavy_100k_jsonl and barc0/transduction_heavy_suggestfunction_100k_jsonl datasets. On the evaluation set it reaches the validation loss shown in the training results table below (0.0319 after the final epoch).
Model description
More information needed

Intended uses & limitations
More information needed

Training and evaluation data
More information needed
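Since the intended-uses section above is not filled in, here is a minimal inference sketch with the transformers library as a rough starting point. The repository id is a placeholder (this card does not state the checkpoint's name), and the chat-style prompt assumes the fine-tune keeps the Llama-3.1-Instruct chat template.

```python
# Minimal inference sketch. The repo id below is a placeholder, not the actual
# name of this checkpoint; replace it with the model's repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/this-checkpoint"  # placeholder: the card does not state the repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 is typical for Llama-3.1-8B inference
    device_map="auto",
)

# Chat-style prompt, assuming the Instruct chat template is preserved by the fine-tune.
messages = [
    {"role": "user", "content": "Given the example input/output grids, predict the output grid for the test input."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```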
Training results

The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 0.0446        | 1.0   | 1478 | 0.0433          |
| 0.0229        | 2.0   | 2956 | 0.0323          |
| 0.014         | 3.0   | 4434 | 0.0319          |
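The training data comes from the two barc0 transduction datasets named at the top of this card, and they can be inspected directly from the Hub. A minimal sketch with the datasets library follows; the "train" split and the printed fields are assumptions, since this card does not document the dataset schema.

```python
# Minimal sketch for inspecting the training data named in this card.
# The "train" split is an assumption; check the dataset cards on the Hub
# for the actual splits and column names.
from datasets import load_dataset

ds = load_dataset("barc0/transduction_heavy_100k_jsonl", split="train")
print(ds)     # dataset size and column names
print(ds[0])  # one raw training record

# The second dataset used for this fine-tune:
ds2 = load_dataset("barc0/transduction_heavy_suggestfunction_100k_jsonl", split="train")
print(ds2)
```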
Base model
meta-llama/Llama-3.1-8B