# tFINE-850m-24x24-v0.5-instruct-L1
This model is a fine-tuned version of pszemraj/tFINE-850m-24x24-v0.4-flan_aug on the pszemraj/infinity-instruct-7m-T2T_en dataset. It achieves the following results on the evaluation set:
- Loss: 1.1478
- Rouge1: 38.4805
- Rouge2: 22.5971
- Rougel: 31.1093
- Rougelsum: 36.596
- Gen Len: 441.475
- Num Input Tokens Seen: 435513684
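The ROUGE figures above are corpus-level scores on the held-out split of the instruct dataset. For reference, here is a minimal sketch of how such scores can be computed with the Hugging Face `evaluate` library; the example prompt and reference below are placeholders, not taken from the actual evaluation data:

```python
import evaluate
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="pszemraj/tFINE-850m-24x24-v0.5-instruct-L1",
    device_map="auto",
)
rouge = evaluate.load("rouge")

# hypothetical pairs; the real eval uses pszemraj/infinity-instruct-7m-T2T_en
prompts = ["summarize: The quick brown fox jumps over the lazy dog."]
references = ["A fox jumps over a dog."]

# generate one prediction per prompt, then score against the references
predictions = [pipe(p, max_new_tokens=64)[0]["generated_text"] for p in prompts]
print(rouge.compute(predictions=predictions, references=references))
```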
## Usage
```python
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="pszemraj/tFINE-850m-24x24-v0.5-instruct-L1",
    device_map="auto",
)

prompt = "write a python script to download a file from a url and save as a local file using requests. explain how it works"
res = pipe(
    prompt,
    max_new_tokens=192,
    top_k=4,                 # contrastive search: small candidate pool
    penalty_alpha=0.6,       # contrastive search: degeneration penalty
    renormalize_logits=True,
    no_repeat_ngram_size=5,  # block verbatim 5-gram repetition
)
print(res[0]["generated_text"])
```
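Setting `top_k=4` together with `penalty_alpha=0.6` enables contrastive search decoding in `transformers`. If you prefer loading the model directly instead of going through `pipeline`, a roughly equivalent sketch:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "pszemraj/tFINE-850m-24x24-v0.5-instruct-L1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "write a python script to download a file from a url and save as a local file using requests. explain how it works"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=192,
    top_k=4,
    penalty_alpha=0.6,
    renormalize_logits=True,
    no_repeat_ngram_size=5,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```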
## Quick eval

Quick eval for `pszemraj/tFINE-850m-24x24-v0.5-instruct-L1`:

```
hf (pretrained=pszemraj/tFINE-850m-24x24-v0.5-instruct-L1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8
```
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---|---|---|---|---|---|---|---|
| boolq | 2 | none | 0 | acc | ↑ | 0.5661 | ± | 0.0087 |
| openbookqa | 1 | none | 0 | acc | ↑ | 0.1540 | ± | 0.0162 |
| | | none | 0 | acc_norm | ↑ | 0.2960 | ± | 0.0204 |
| piqa | 1 | none | 0 | acc | ↑ | 0.6094 | ± | 0.0114 |
| | | none | 0 | acc_norm | ↑ | 0.5952 | ± | 0.0115 |
| social_iqa | 0 | none | 0 | acc | ↑ | 0.3900 | ± | 0.0110 |
| tinyArc | 0 | none | 25 | acc_norm | ↑ | 0.2903 | ± | N/A |
| tinyGSM8k | 0 | flexible-extract | 5 | exact_match | ↑ | 0.0471 | ± | N/A |
| | | strict-match | 5 | exact_match | ↑ | 0.0339 | ± | N/A |
| tinyHellaswag | 0 | none | 10 | acc_norm | ↑ | 0.2490 | ± | N/A |
| tinyMMLU | 0 | none | 0 | acc_norm | ↑ | 0.3021 | ± | N/A |
| winogrande | 1 | none | 0 | acc | ↑ | 0.4925 | ± | 0.0141 |
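These numbers come from EleutherAI's lm-evaluation-harness. A sketch of reproducing them via the harness's Python API, assuming lm-eval v0.4+ (the task list is copied from the table above; the tiny* tasks set their own few-shot counts internally):

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=pszemraj/tFINE-850m-24x24-v0.5-instruct-L1,"
        "dtype=bfloat16,trust_remote_code=True"
    ),
    tasks=[
        "boolq", "openbookqa", "piqa", "social_iqa",
        "tinyArc", "tinyGSM8k", "tinyHellaswag", "tinyMMLU", "winogrande",
    ],
    batch_size=8,
)
print(results["results"])
```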