Evaluation with lm-evaluation-harness on GSM8K (5-shot) using the vLLM backend:
```shell
lm_eval --model vllm --model_args pretrained=/home/mgoin/code/nemotron-3-8b-chat-4k-sft-HF --tasks gsm8k --num_fewshot 5 --batch_size auto
```

vllm (pretrained=/home/mgoin/code/nemotron-3-8b-chat-4k-sft-HF), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.1031|±  |0.0084|
|     |       |strict-match    |     5|exact_match|↑  |0.1016|±  |0.0083|
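For downstream processing, the markdown table printed by `lm_eval` can be parsed programmatically. The helper below is an illustrative sketch (not part of lm-eval) that splits the table into records and carries the task name down across the merged rows:

```python
# Illustrative sketch (not an lm-eval API): parse the markdown results
# table that `lm_eval` prints and return one dict per result row.
def parse_lm_eval_table(table: str) -> list[dict]:
    rows = []
    for line in table.strip().splitlines():
        # Skip non-table lines and the |---|---:| separator row.
        if not line.startswith("|") or set(line) <= {"|", "-", ":", " "}:
            continue
        rows.append([c.strip() for c in line.strip("|").split("|")])
    header, body = rows[0], rows[1:]
    records, task = [], ""
    for cells in body:
        rec = dict(zip(header, cells))
        # The task name is only printed on the first row of each task;
        # carry it forward for the continuation rows (e.g. strict-match).
        task = rec["Tasks"] or task
        rec["Tasks"] = task
        records.append(rec)
    return records

table = """
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|up |0.1031|+- |0.0084|
|     |       |strict-match    |     5|exact_match|up |0.1016|+- |0.0083|
"""
results = parse_lm_eval_table(table)
```

Each record then exposes the filter and score directly, e.g. `results[1]["Filter"]` is `"strict-match"` and `float(results[1]["Value"])` is `0.1016`.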
