---
base_model: microsoft/mpnet-base
tags:
- edu score
- data filter
inference: false
datasets:
- HuggingFaceFW/fineweb-edu-llama3-annotations
license: mit
language:
- en
---
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/pszemraj/eduscore-regression/runs/k2lc9nx3)
# mpnet-base-edu-classifier
This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co./microsoft/mpnet-base) on the HuggingFaceFW/fineweb-edu-llama3-annotations dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2105
- MSE: 0.2105
## Usage
Note that the example below runs on CPU; for GPU inference you will need to make some small changes (see the sketch after the code block).
```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("pszemraj/mpnet-base-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained("pszemraj/mpnet-base-edu-classifier")
model.eval()

text = "This is a test sentence."
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# the model has a single regression head; squeeze it down to a scalar score
score = outputs.logits.squeeze(-1).item()
result = {
    "text": text,
    "score": score,
    "int_score": int(round(max(0, min(score, 5)))),  # clamp to the 0-5 label range
}
print(result)
# {'text': 'This is a test sentence.', 'score': 0.3350256383419037, 'int_score': 0}
```
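For GPU, a minimal sketch of the changes (assuming a CUDA device is available): move the model and the tokenized inputs to the device before the forward pass.

```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("pszemraj/mpnet-base-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained(
    "pszemraj/mpnet-base-edu-classifier"
).to(device)
model.eval()

text = "This is a test sentence."
# BatchEncoding.to() moves all input tensors to the same device as the model
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True).to(device)
with torch.no_grad():
    outputs = model(**inputs)
score = outputs.logits.squeeze(-1).item()  # .item() copies the scalar back to CPU
print(score)
```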
## Intended uses & limitations
Refer to the original HF classifier's [model card](https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier#limitations) for more details.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 90085
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-09
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
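
For reference, a minimal sketch of how these hyperparameters map onto 🤗 `TrainingArguments` (the actual training script is not included in this repo, so this reconstruction is an assumption; field values are taken from the list above):

```py
from transformers import TrainingArguments

# hypothetical reconstruction of the run configuration from the fields above
args = TrainingArguments(
    output_dir="mpnet-base-edu-classifier",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=90085,
    gradient_accumulation_steps=16,  # effective train batch size: 8 x 16 = 128
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-9,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=1.0,
)
```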