RedMist137/DPO-Zephyr-7B

Tags: Safetensors · RedMist137/AIHF_DPO_iter0 · opt · alignment-handbook · trl · dpo · Generated from Trainer
License: other
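The repo is tagged trl / dpo / alignment-handbook and ships the tokenizer and config files listed below, so a typical way to use it is through transformers. The following is a minimal sketch, not taken from the model card: the repo id comes from this page, and loading the model itself assumes the safetensors weights implied by the Safetensors tag are actually present in the revision you pull.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "RedMist137/DPO-Zephyr-7B"

# Tokenizer artifacts (tokenizer.json, vocab.json, merges.txt, ...) are in the file listing.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Assumption: model weights (e.g. *.safetensors shards) exist in the revision being
# downloaded; the file listing below only shows tokenizer/config/training files.
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Hello"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```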
DPO-Zephyr-7B (1 contributor, history: 28 commits)
Latest commit: c049107 (verified) by RedMist137, 21 days ago: "Training in progress, step 1700"
File                     Size       Flags        Last commit message                Age
.gitattributes           1.52 kB    Safe         initial commit                     26 days ago
config.json              752 Bytes  Safe         Training in progress, step 100     21 days ago
merges.txt               456 kB     Safe         Training in progress, step 100     21 days ago
special_tokens_map.json  548 Bytes  Safe         Training in progress, step 100     21 days ago
tokenizer.json           2.11 MB    Safe         Training in progress, step 100     21 days ago
tokenizer.model          493 kB     Safe, LFS    Training in progress, step 100     26 days ago
tokenizer_config.json    642 Bytes  Safe         Training in progress, step 100     21 days ago
training_args.bin        6.26 kB    pickle, LFS  Training in progress, step 1700    21 days ago
vocab.json               798 kB     Safe         Training in progress, step 100     21 days ago

training_args.bin: Detected Pickle imports (13): transformers.trainer_utils.IntervalStrategy, transformers.integrations.deepspeed.HfTrainerDeepSpeedConfig, transformers.training_args.OptimizerNames, transformers.trainer_utils.HubStrategy, transformers.trainer_utils.SchedulerType, transformers.trainer_pt_utils.AcceleratorConfig, accelerate.utils.dataclasses.DistributedType, torch.bfloat16, torch.device, transformers.integrations.deepspeed.HfDeepSpeedConfig, accelerate.utils.dataclasses.DeepSpeedPlugin, alignment.configs.DPOConfig, accelerate.state.PartialState (see the loading sketch after this listing).
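training_args.bin is a pickled training-configuration object rather than plain tensors, which is why the page lists its pickle imports (alignment.configs.DPOConfig plus Transformers/Accelerate/DeepSpeed classes). Below is a minimal sketch for inspecting it, assuming you trust the repo and have transformers, accelerate, deepspeed, and the alignment-handbook package installed; the repo id and filename are taken from this page, nothing here comes from the model card itself.

```python
import torch
from huggingface_hub import hf_hub_download

# Download just the training arguments file from the Hub.
path = hf_hub_download("RedMist137/DPO-Zephyr-7B", "training_args.bin")

# Unpickling executes the classes listed above, so only do this for trusted repos.
# weights_only=False is needed because the payload is a config object, not tensors
# (PyTorch 2.6+ defaults torch.load to weights_only=True).
args = torch.load(path, weights_only=False)

print(type(args))          # expected to be alignment.configs.DPOConfig
print(args.learning_rate)  # hyperparameters are attributes of the config object
```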