Mistral-7B-v0.3-stepbasin-books-20480

This model is a fine-tuned version of mistralai/Mistral-7B-v0.3 on this dataset, created to test super-long text generation.

  • fine-tuned at a context length of 20480 tokens; it should consistently generate 8k+ tokens of continuous output (example)
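A minimal usage sketch for long-form generation with this checkpoint, assuming the standard Hugging Face `transformers` API. The repo id comes from this card; the sampling values (`temperature`, `repetition_penalty`) are illustrative assumptions, not settings from the card.

```python
MODEL_ID = "BEE-spoke-data/Mistral-7B-v0.3-stepbasin-books-20k"


def build_generation_kwargs(max_new_tokens: int = 8192) -> dict:
    """Sampling settings for long continuations; values are illustrative."""
    return {
        "max_new_tokens": max_new_tokens,  # the card reports 8k+ token outputs
        "do_sample": True,
        "temperature": 0.8,
        "repetition_penalty": 1.1,
    }


if __name__ == "__main__":
    # Loading a 7B model requires substantial RAM/VRAM (BF16 weights).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer("Chapter 1\n", return_tensors="pt")
    out = model.generate(**inputs, **build_generation_kwargs())
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Since the model was tuned at a 20480-token context, prompt plus generated tokens should stay within that window.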

It achieves the following results on the evaluation set:

  • Loss: 2.0784
  • Accuracy: 0.5396
  • Num Input Tokens Seen: 16384000
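The reported evaluation loss implies a token-level perplexity of roughly 8, via the standard relation perplexity = exp(loss) (this derivation is not stated on the card):

```python
import math

eval_loss = 2.0784  # eval loss from the results above
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # ≈ 7.99
```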
Model size: 7.25B params · Tensor type: BF16 · Format: Safetensors

Model tree for BEE-spoke-data/Mistral-7B-v0.3-stepbasin-books-20k

Finetuned from: mistralai/Mistral-7B-v0.3