Tags: Text Generation · Transformers · GGUF · English · llama · Inference Endpoints

QuantFactory/FW-ProX-1.7B-GGUF

This is a quantized version of gair-prox/FW-ProX-1.7B, created using llama.cpp.
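
As a usage sketch (not part of the original card), a GGUF file from this repo can be loaded for local inference with the llama-cpp-python bindings. The quant filename pattern below is an assumption; check the repository's file list for the exact names available.

```python
# Minimal sketch: download and run one of this repo's GGUF quants with
# llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# The filename pattern is an assumption; check the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/FW-ProX-1.7B-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical pattern for a 4-bit quant
    n_ctx=2048,               # context length for this session
)

out = llm("The FineWeb dataset is", max_tokens=64)
print(out["choices"][0]["text"])
```

Lower-bit quants are smaller and faster but lose some accuracy; 4-bit or 5-bit files are a common middle ground.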

Original Model Card

FW-ProX-1.7B

ArXiv | Models | Data | Code

FW-ProX-1.7B is a small language model. It was trained on FineWeb-pro for 50B tokens.

Evaluations

ProX models are evaluated on 10 language model benchmarks in a zero-shot setting.

|      | ARC-c | ARC-e | CSQA | HellaS | MMLU | OBQA | PiQA | SIQA | WinoG | SciQ | AVG  |
|------|-------|-------|------|--------|------|------|------|------|-------|------|------|
| raw  | 28.5  | 52.6  | 33.9 | 53.2   | 29.8 | 32.6 | 72.9 | 40.2 | 53.0  | 77.1 | 47.4 |
| ours | 34.4  | 63.9  | 32.6 | 53.0   | 33.1 | 34.4 | 73.1 | 39.3 | 52.7  | 81.5 | 49.8 |
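
For background, zero-shot multiple-choice benchmarks of this kind are typically scored by comparing the log-likelihood the model assigns to each answer option. The sketch below illustrates that scoring rule with the original checkpoint; it is a generic illustration, not the evaluation code behind the table, and the example question is invented.

```python
# Illustrative zero-shot multiple-choice scoring: rank answer options by
# the log-likelihood the model assigns to them. A generic sketch, not the
# authors' evaluation code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gair-prox/FW-ProX-1.7B"  # original full-precision checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def choice_logprob(prompt: str, choice: str) -> float:
    """Sum of log-probs of the `choice` tokens, conditioned on `prompt`."""
    # Assumes the prompt tokenizes to the same prefix in both encodings.
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # The token at position i is predicted by the logits at position i - 1.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    continuation = full_ids[0, prompt_len:]
    rows = range(prompt_len - 1, full_ids.shape[1] - 1)
    return sum(logprobs[r, t].item() for r, t in zip(rows, continuation))

# Zero-shot means no in-context examples: just the question and options.
question = "Question: Which gas do plants absorb during photosynthesis?\nAnswer:"
options = ["carbon dioxide", "oxygen", "nitrogen", "helium"]
print(max(options, key=lambda o: choice_logprob(question, o)))
```

A harness such as EleutherAI's lm-evaluation-harness automates this kind of scoring over the full benchmark suites.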

Citation

@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
Format: GGUF
Model size: 1.74B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

