---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
tags:
- fp8
- vllm
base_model: nvidia/Minitron-8B-Base
---

# Minitron-8B-Base-FP8

FP8 quantized checkpoint of [nvidia/Minitron-8B-Base](https://huggingface.co./nvidia/Minitron-8B-Base) for use with vLLM.

## Evaluations

GSM8K results for this quantized model:

```
lm_eval --model vllm --model_args pretrained=Minitron-8B-Base-FP8 --tasks gsm8k --num_fewshot 5 --batch_size auto

vllm (pretrained=Minitron-8B-Base-FP8), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.5019|±  |0.0138|
|     |       |strict-match    |     5|exact_match|↑  |0.4989|±  |0.0138|
```

Baseline (unquantized) model:

```
lm_eval --model vllm --model_args pretrained=nvidia/Minitron-8B-Base --tasks gsm8k --num_fewshot 5 --batch_size auto

vllm (pretrained=nvidia/Minitron-8B-Base), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.5080|±  |0.0138|
|     |       |strict-match    |     5|exact_match|↑  |0.5064|±  |0.0138|
```

Evaluations reported in the [original paper](https://arxiv.org/pdf/2407.14679):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/YFmlifuYBVtdfsdPVgV4u.png)
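
## Usage

A minimal sketch of loading this checkpoint with vLLM's Python API. It assumes the model is available under the local path or repo id `Minitron-8B-Base-FP8` (the same identifier used in the lm_eval command above); adjust the name and sampling parameters to your setup. vLLM normally picks up the FP8 quantization from the checkpoint config, but it can also be requested explicitly.

```python
from vllm import LLM, SamplingParams

# Assumed local path / repo id for this quantized checkpoint.
model_id = "Minitron-8B-Base-FP8"

# quantization="fp8" is optional if the checkpoint config already declares it.
llm = LLM(model=model_id, quantization="fp8")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

prompts = ["The capital of France is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```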