Update README.md
README.md CHANGED
@@ -2,8 +2,6 @@
 license: llama2
 language:
 - en
-tags:
-- not-for-all-audiences
 ---
 4.25 bpw/bits exl2 quantization of [Venus-120b](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0), using the measurement.json and dataset posted in the model page.
 
@@ -13,8 +11,6 @@ This size lets it being able to use CFG on 72GB VRAM. (use 20,21.5,21.5 for gpu
 
 # Venus 120b - version 1.0
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/655febd724e0d359c1f21096/BSKlxWQSbh-liU8kGz4fF.png)
-
 ## Overview
 
 The goal was to create a large model that's highly capable for RP/ERP scenarios. Goliath-120b is excellent for roleplay, and Venus-120b was created with the idea of attempting to mix more than two models together to see how well this method works.
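The hunk header above notes that this 4.25 bpw size is what lets CFG fit in 72 GB of VRAM with a 20,21.5,21.5 GPU split. Below is a minimal sketch of loading the quant that way with the exllamav2 Python API, assuming three 24 GB cards; the model directory path, prompt, and sampler settings are placeholders, not values taken from the model page.

```python
# Sketch only: load the 4.25 bpw exl2 quant across three GPUs using the
# 20 / 21.5 / 21.5 GB split suggested in the README diff above.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Venus-120b-v1.0-exl2-4.25bpw"  # hypothetical local path
config.prepare()

model = ExLlamaV2(config)
# gpu_split is per-device VRAM to use, in GB; 20 + 21.5 + 21.5 = 63 GB for
# weights leaves headroom on 72 GB total for the KV cache and the extra
# negative-prompt pass that CFG requires.
model.load(gpu_split=[20, 21.5, 21.5])

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time,", settings, 128))
```

If you load through a front end such as text-generation-webui instead, the same three numbers go into its gpu-split field for the exllamav2 loader; adjust them to whatever your cards actually have free.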