Quantized Fine-tuned Model: sqft-phi-3.5-mini-instruct-wikitext2-awq-64g-ppl10.41

Evaluation

To reproduce the WikiText-2 perplexity referenced in the model name, run the following lm-evaluation-harness (lm_eval) command:

CUDA_VISIBLE_DEVICES=$DEVICES lm_eval --model hf --model_args pretrained=IntelLabs/sqft-phi-3.5-mini-instruct-wikitext2-awq-64g-ppl10.41,max_length=4096 --tasks wikitext --batch_size auto:4 --output_path result.json
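For quick inference outside the harness, the checkpoint can be loaded through the standard transformers API. The snippet below is a minimal sketch, not part of the original card: it assumes the autoawq package is installed so the AWQ weights are handled automatically, and the prompt text is purely illustrative.

# Minimal usage sketch (assumptions: transformers with AWQ support, autoawq installed, a CUDA GPU available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IntelLabs/sqft-phi-3.5-mini-instruct-wikitext2-awq-64g-ppl10.41"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",            # place the quantized weights on the available GPU(s)
    torch_dtype=torch.float16,
)

# Illustrative prompt; the model is instruction-tuned, so a chat template can also be applied.
prompt = "Explain what weight quantization does to a language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))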

License

Apache-2.0

Safetensors model size: 714M params (tensor types: I32 · FP16)