Performance comparison with the original model (Microsoft/phi-4)

by noneUsername

I made a W8A8-format quantization of this model, so I was interested in how it performs on the GSM8K benchmark.
After Microsoft released the official model, I ran the same GSM8K test on it for comparison. The results are below.
The conclusion is that abliteration has a very small impact on this model, at least nothing significant in the GSM8K test.
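For reference, the runs below can be reproduced with the lm-evaluation-harness CLI and its vLLM backend. This is a sketch assuming the harness is installed (`pip install lm-eval[vllm]`) and that the model path matches the one in the logs; swap in `/root/autodl-tmp/phi-4` for the official model:

```shell
lm_eval --model vllm \
  --model_args pretrained=/root/autodl-tmp/phi-4-abliterated,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size auto
```

The `--limit` flag caps the number of evaluated examples (250 or 500 in the runs below), which is why the two runs per model report different stderr.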

vllm (pretrained=/root/autodl-tmp/phi-4-abliterated,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| gsm8k | 3 | flexible-extract | 5 | exact_match | 0.932 | ± 0.016 |
| | | strict-match | 5 | exact_match | 0.932 | ± 0.016 |

vllm (pretrained=/root/autodl-tmp/phi-4-abliterated,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| gsm8k | 3 | flexible-extract | 5 | exact_match | 0.922 | ± 0.012 |
| | | strict-match | 5 | exact_match | 0.922 | ± 0.012 |

vllm (pretrained=/root/autodl-tmp/phi-4,add_bos_token=true,max_model_len=2048,tensor_parallel_size=2,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| gsm8k | 3 | flexible-extract | 5 | exact_match | 0.928 | ± 0.0164 |
| | | strict-match | 5 | exact_match | 0.928 | ± 0.0164 |

vllm (pretrained=/root/autodl-tmp/phi-4,add_bos_token=true,max_model_len=2048,tensor_parallel_size=2,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| gsm8k | 3 | flexible-extract | 5 | exact_match | 0.922 | ± 0.012 |
| | | strict-match | 5 | exact_match | 0.922 | ± 0.012 |
