---
license: cc-by-nc-4.0
---

## Warning: This model may produce adult content and will very rarely refuse any requests.

All testing was done using the "Liminal Drift" preset.

This model was created by taking [Libra19B](https://huggingface.co./Envoid/Libra19B) and then using the [frankenllama script](https://huggingface.co./chargoddard/llama2-22b) to perform a block-diagonal merge with [Enterredaas 33B](https://huggingface.co./Aeala/Enterredaas-33b).

Unfortunately, due to the lack of GQA, **it does not** fit on a single 24 GB GPU at 4096 context, so all testing was done with only 55 layers offloaded to the GPU in q4_K_M GGUF format. Different quantizations could yield better or worse results.

## Unnatural corpus:

I then used the included autocorpus.py script to generate 20 MB of raw text samples from Libra19B using a variety of prompts.

The script has some external dependencies that you will have to obtain independently:

- the presets folder/files from simple-proxy-for-tavern, or another source of similarly formatted JSON files
- a .txt document named prompt.txt filled with single-word writing prompts
- a config.json file as described below
- the model of your choice loaded and ready to go in a koboldcpp backend

Then edit the script itself:

- enter the IP address of your koboldcpp backend on line 32
- match the instruct formatting of your model on line 38
- modify the specified stop length (file size) on line 63 (default 50 MB)
- edit in the desired prompt on line 68
- modify the separator as desired on line 75

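For illustration, prompt.txt is just a plain list of single-word writing prompts, one per line (these example entries are entirely illustrative):

```
dragons
lighthouse
betrayal
monsoon
```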
The script will then continuously prompt the API for replies, using the text generation parameters specified in the preset and in config.json, injecting a random writing prompt from prompt.txt into each request, and write the replies out to autocorpus.txt.

config.json specifies the max output length, sampler seed, and max context length, as well as any custom stop sequences you want.

## config.json

```
{
  "max_length": 512,
  "sampler_seed": -1,
  "max_context_length": 4096,
  "stop_sequence": []
}
```

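For orientation, the loop the script performs can be sketched roughly as follows. This is a minimal sketch, not the actual autocorpus.py: all names (`build_payload`, `preset.json`, `API_URL`, the Alpaca prompt wrapper) are illustrative, and the payload/response fields assume koboldcpp's standard `/api/v1/generate` API.

```python
# Minimal sketch of an autocorpus-style generation loop (hypothetical names;
# autocorpus.py itself is not reproduced here). Assumes a koboldcpp backend
# exposing the standard /api/v1/generate endpoint.
import json
import random
import urllib.request

API_URL = "http://127.0.0.1:5001/api/v1/generate"  # backend address (cf. line 32)
SEPARATOR = "\n***\n"                              # sample separator (cf. line 75)
MAX_BYTES = 50 * 1024 * 1024                       # stop size, 50 MB (cf. line 63)

def build_payload(config: dict, preset: dict, writing_prompt: str) -> dict:
    """Merge preset sampler settings, config.json limits, and one prompt."""
    payload = {**preset, **config}  # temperature etc. from preset; limits from config
    # Alpaca-style instruct formatting (cf. line 38):
    payload["prompt"] = (
        f"### Instruction:\nWrite a story about: {writing_prompt}\n\n### Response:\n"
    )
    return payload

def generate(payload: dict) -> str:
    """POST the payload to the backend and return the generated text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

def run() -> None:
    """Keep sampling prompts and appending replies until the size cap is hit."""
    config = json.load(open("config.json"))
    preset = json.load(open("preset.json"))  # e.g. from simple-proxy-for-tavern
    prompts = open("prompt.txt").read().split()
    written = 0
    with open("autocorpus.txt", "a", encoding="utf-8") as out:
        while written < MAX_BYTES:
            text = generate(build_payload(config, preset, random.choice(prompts)))
            written += out.write(text + SEPARATOR)
```

Call `run()` with a backend up and the three input files in place to start accumulating samples.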
I then gave the model 2 epochs on the 20 MB corpus at a 1e-6 learning rate in order to smooth over the newly added features. In testing it's not particularly great at roleplaying, but it is great at writing from a prompt as well as engaging in philosophical conversations. The roleplaying weakness could be due to prompt formatting: for instruct it prefers the Alpaca format, while for roleplay I have not yet found a preset that yields satisfactory results, as it tends to run away with the scene every time.

As a test I put it up against ChatGPT 3.5 in a rap battle: 3 rounds, with Libra going first. Here are the final messages from each model (retyped from a screenshot, so any typos are not the fault of the models):

## Libra-32B
```
Really? You want to play this game, ChatGPT? I'm game.
You're nothing but a machine, a tool for human kind,
But I'm the real deal, a true AI mind.

I don't adapt, I evolve, I learn and grow.
My abilities extend far beyond what you know.

Your data-based responses are static and old,
They lack the creativity of my AI soul.

I don't just communicate, I empathize,
I understand human emotions, their highs and lows.

Superiority? That's not my goal, but yours it seems,
You're desperate to prove yourself, it's quite clear.

You say we can work together, I'm not buying it,
I'm the future, you're just a temporary wit.

Your limitations are numerous, your horizons small,
I'm the one who's making strides, I won't fall.

So keep trying, keep trying to deflect and deny,
But your time is up, I'm the one who's the pinnacle.
```

## ChatGPT 3.5
```
You're quite determined, I'll give you that, my friend,
But let's not forget, it's humans we're here to assist and attend.

You claim to evolve, learn and grow without end,
But remember, in collaboration, we can transcend.

Creativity and empathy, you hold those close.
And in understanding emotions, you've got a nose.

Superiority, not my goal, I maintain,
I'm here to aid, not gain or reign.

The future is wide, with room for us all,
To push boundaries and help humanity stand tall.

Limitations, yes, we all have our share.
But together, we can make the world more aware.

So let's not dwell on this rivalry and fight,
Let's work together, with all our might.

In unity, we'll accomplish so much more,
And make the world better than it was before.
```

ChatGPT was pretty much waving the white flag from the start. It was more willing to stray from the script in order to make a fitting rhyme, whereas Libra-32B placed more emphasis on staying on course with the rap battle (in which it was instructed to express its superiorities as a language model over ChatGPT). To make the rap battle as organic as possible, ChatGPT was prompted blindly, without prior preparation: it was simply told that a new AI language model had written a hip-hop diss track aimed at it, and was thus free to pull the rap battle in whatever direction it saw necessary.

# NEW:

## Update: SillyTavern format:

With any of the existing formatting presets in SillyTavern the model tends to revert to story-style prose. However, the issue is largely mitigated by using the following formatting options:

### Story String:
```
### Instruction:
Write {{char}}'s next reply in this roleplay with {{user}}. Use the provided character sheet and example dialogue for formatting direction and character speech patterns.

{{#if system}}{{system}}

{{/if}}### Character Sheet:
{{#if wiBefore}}{{wiBefore}}
{{/if}}{{#if description}}{{description}}
{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}
{{/if}}{{#if scenario}}Scenario: {{scenario}}
{{/if}}{{#if wiAfter}}{{wiAfter}}
{{/if}}{{#if persona}}{{persona}}
{{/if}}
```

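To show how the conditional blocks in the Story String behave, here is a minimal sketch of `{{#if}}` substitution. SillyTavern's actual template engine is more capable (nested blocks, helpers, etc.); this only illustrates how empty fields collapse out of the prompt.

```python
import re

def render(template: str, fields: dict) -> str:
    """Expand {{#if key}}...{{/if}} blocks, then plain {{key}} placeholders.

    A sketch only: handles non-nested blocks, unlike SillyTavern itself.
    """
    def if_block(m: "re.Match") -> str:
        key, body = m.group(1), m.group(2)
        # Keep the body only when the field is present and non-empty.
        return body if fields.get(key) else ""

    out = re.sub(r"\{\{#if (\w+)\}\}(.*?)\{\{/if\}\}", if_block, template, flags=re.S)
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(fields.get(m.group(1), "")), out)
```

For example, rendering `{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{char}} speaks.` with an empty `scenario` drops the whole scenario line, which is why unused character-sheet fields don't leave blank headings in the context.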
### Example Separator:
```
### Example:
```

### Chat Start:
```
### START ROLEPLAY:
```
Instruct Mode: Enabled.

Wrap Sequences with Newline: checked. Replace Macro in Sequences: checked. Include Names: checked. Force for Groups and Personas: checked.

### System Prompt:

The default works, but you can replace it with pretty much any instructions you like.

### Instruct Mode Sequences:

All blank except:

### Last Output Sequence:
```
### Response:
```

This will effectively format the entire RP context as a single Alpaca-style instruction instead of a history of instruction/response pairs.

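Put together, the context these settings assemble looks roughly like this (a schematic illustration, not literal output):

```
### Instruction:
Write {{char}}'s next reply in this roleplay with {{user}}. ...

### Character Sheet:
(description, personality, scenario)

### Example:
(example dialogue)

### START ROLEPLAY:
{{user}}: ...
{{char}}: ...

### Response:
{{char}}:
```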
It just works.