Update README.md
README.md CHANGED

````diff
@@ -4,7 +4,7 @@ language:
 tags:
 - fusion-bench
 base_model: meta-llama/Llama-3.2-1B-Instruct
-pipeline_tag: text-
+pipeline_tag: text-classification
 library_name: transformers
 datasets:
 - hendrydong/preference_700K
@@ -37,4 +37,4 @@ fusion_bench --config-name llama_full_finetune \
 modelpool=SeqenceClassificationModelPool/llama_preference700k
 ```
 
-8 GPUs, per-GPU batch size is 8, with gradient accumulation of 16 steps, so the effective batch size is 1024.
+8 GPUs, per-GPU batch size is 8, with gradient accumulation of 16 steps, so the effective batch size is 1024.
````
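The arithmetic in the training note checks out: with data parallelism across 8 GPUs, a per-GPU batch of 8, and 16 gradient-accumulation steps, the effective batch size is 8 × 8 × 16 = 1024. A minimal sanity check (the figures are taken from the README text, not from any config file):

```python
# Effective batch size = num_gpus * per_gpu_batch_size * grad_accum_steps
# Values as stated in the README's training note.
num_gpus = 8
per_gpu_batch_size = 8
grad_accum_steps = 16

effective_batch_size = num_gpus * per_gpu_batch_size * grad_accum_steps
print(effective_batch_size)  # → 1024
```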