inarikami committed on

Commit 6ea4884 · verified · 1 Parent(s): 58e13b3

remove table conversion artifacts

Files changed (1):

1. README.md +0 -5
README.md CHANGED

@@ -13,11 +13,6 @@ tags:

  Distillation of DeepSeek-R1 to Qwen 32B, quantized using AWQ to wint4. It fits on any 24GB VRAM GPU or 32GB URAM device!

-
- ## Benchmarks:
-
- Here's how you can convert the given data into a Markdown format, including a short description about the benchmark:
-
  ## MMLU-PRO

  The MMLU-PRO dataset evaluates subjects across 14 distinct fields using a 5-shot accuracy measurement. Each task assesses models following the methodology of the original MMLU implementation, with each having ten possible choices.
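The "fits on any 24GB VRAM GPU" claim in the model description follows from simple weight-size arithmetic. A rough sketch (figures are approximate, and KV cache, activations, and quantization metadata such as scales and zero points add overhead on top of this):

```python
# Back-of-the-envelope check (approximate): why wint4 (4-bit) weights for a
# ~32B-parameter model fit in 24 GB of VRAM. Ignores KV cache, activations,
# and quantization metadata, which add some overhead.
params = 32e9                      # ~32 billion parameters
bits_per_weight = 4                # AWQ wint4
weight_bytes = params * bits_per_weight / 8
weight_gib = weight_bytes / 1024**3
print(f"quantized weights: ~{weight_gib:.1f} GiB")  # roughly 15 GiB
```

That leaves several gigabytes of headroom on a 24 GB card for the KV cache and runtime overhead, which is why the unquantized fp16 model (~60 GiB of weights) does not fit but the wint4 version does.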
 
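The 5-shot, ten-choice protocol described above can be sketched as follows. This is illustrative only, not the official MMLU-PRO harness; the function names and prompt layout are our own assumptions about a typical MMLU-style setup:

```python
# Illustrative sketch of assembling a 5-shot multiple-choice prompt in the
# MMLU style with up to ten options (A-J). Not the official harness.
from string import ascii_uppercase

def format_question(question, choices, answer=None):
    letters = ascii_uppercase[:10]  # MMLU-PRO uses up to ten choices
    lines = [question]
    for letter, choice in zip(letters, choices):
        lines.append(f"{letter}. {choice}")
    # In-context examples carry the answer letter; the target ends at "Answer:"
    lines.append(f"Answer: {answer}" if answer else "Answer:")
    return "\n".join(lines)

def build_prompt(shots, target):
    # shots: list of (question, choices, answer_letter) in-context examples
    # target: (question, choices) to be answered by the model
    blocks = [format_question(q, c, a) for q, c, a in shots]
    blocks.append(format_question(*target))
    return "\n\n".join(blocks)
```

The model's accuracy is then scored by comparing the letter it generates after the final "Answer:" against the gold label.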