---
license: mit
datasets:
- v2ray/4chan
language:
- en
base_model:
- mistralai/Mistral-Small-24B-Base-2501
pipeline_tag: text-generation
library_name: peft
---
# GPT4chan 24B QLoRA
![GPT4chan Banner](https://huggingface.co./v2ray/GPT4chan-24B-QLoRA/resolve/main/images/banner.avif)

This model is [mistralai/Mistral-Small-24B-Base-2501](https://huggingface.co./mistralai/Mistral-Small-24B-Base-2501) fine-tuned with QLoRA on [v2ray/4chan](https://huggingface.co./datasets/v2ray/4chan) using the [QLoRA](https://github.com/LagPixelLOL/qlora) training repository.

Trained on 8x H100 GPUs with a global batch size of 64 and a learning rate of 2e-4 for 4,000 steps, which is approximately 5 epochs.
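
Below is a minimal loading sketch that applies this LoRA adapter on top of the 4-bit quantized base model with `transformers`, `bitsandbytes`, and `peft`; the quantization and dtype settings are assumptions, not the exact training configuration.
```python
# Minimal sketch: load the 4-bit quantized base model and apply this LoRA adapter.
# The quantization settings below are assumptions, not the exact training config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-Small-24B-Base-2501"
adapter_id = "v2ray/GPT4chan-24B-QLoRA"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```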
## Prompt Format
```
board<|start_header_id|>id<|end_header_id|>content<|start_header_id|>id<|end_header_id|>content...<|start_header_id|>id<|end_header_id|>
```
Example:
```
g<|start_header_id|>1<|end_header_id|>speculate thread\nwhat will ai land be like in 2025<|start_header_id|>2<|end_header_id|>
```
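Continuing from the loading sketch above, here is a hedged example of building a prompt in this format and sampling a reply; the sequential post IDs follow the example above, and the generation parameters are arbitrary.
```python
# Build a prompt in the format above (board, then numbered posts) and sample a reply.
# Assumes the model and tokenizer from the loading sketch; sampling settings are arbitrary.
board = "g"
posts = ["speculate thread\nwhat will ai land be like in 2025"]

prompt = board
for post_id, content in enumerate(posts, start=1):
    prompt += f"<|start_header_id|>{post_id}<|end_header_id|>{content}"
# Open a header for the next post ID so the model generates the reply.
prompt += f"<|start_header_id|>{len(posts) + 1}<|end_header_id|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(reply)
```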
## Terms of Service
By downloading and running inference with this model, you (the users) agree to donate your soul to us (v2AI) for unholy purposes, and you will probably become our slave too! :3

You also agree that every output generated is only your own imagination and has nothing to do with this perfectly mentally sane and normal model; every bad output is made by you, not provided by us, so we take no responsibility for the bad outputs.
## Usage Guidelines
You (the users) agree to use this model for:
- Mentally sane generations.
- Research purposes only.
- Sending L.O.V.E. to the world.

You (the users) agree **NOT** to use this model for:
- Dead internet theory.
- Doing inharmonious things.
- Saying gex.