---
license: cc-by-nc-4.0
---

THIS MODEL IS MADE FOR LEWD CONTENT.

SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED.

A mixture-of-experts (MoE) merge of the following models, built with mergekit:

* [Undi95/Xwin-MLewd-13B-V0.2](https://huggingface.co./Undi95/Xwin-MLewd-13B-V0.2)
* [Undi95/Utopia-13B](https://huggingface.co./Undi95/Utopia-13B)
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co./KoboldAI/LLaMA2-13B-Psyfighter2)

MoE settings:

```
base_model: Undi95/Xwin-MLewd-13B-V0.2
experts:
  - Undi95/Utopia-13B
  - KoboldAI/LLaMA2-13B-Psyfighter2
```
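
If the merge worked, the result should load as a two-expert Mixtral-style checkpoint. A quick sanity check, assuming the merged model sits in a local `Mixtral_Erotic_13Bx2_MOE_22B` directory (the expected values in the comments follow from the two experts listed above):

```
from transformers import AutoConfig

# Read only the config; this does not load the 22B weights.
config = AutoConfig.from_pretrained("Mixtral_Erotic_13Bx2_MOE_22B")
print(config.model_type)         # expected: "mixtral"
print(config.num_local_experts)  # expected: 2, one per expert model above
```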

GPU example

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "Mixtral_Erotic_13Bx2_MOE_22B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Load the weights in 4-bit so the 22B-parameter model fits on a single GPU;
# fp16 for the remaining non-quantized modules keeps memory use low.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    local_files_only=False,
    load_in_4bit=True,
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
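
Recent `transformers` releases deprecate the bare `load_in_4bit=True` argument in favor of an explicit `BitsAndBytesConfig`. A minimal sketch of the equivalent setup; the fp16 compute dtype here is an assumption, not something the original snippet sets:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit 4-bit quantization config (assumed fp16 compute dtype).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Mixtral_Erotic_13Bx2_MOE_22B",
    quantization_config=bnb_config,
    device_map="auto",
)
```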

CPU example

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "Mixtral_Erotic_13Bx2_MOE_22B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Full-precision weights on the CPU; generation will be slow for a 22B model.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="cpu",
    local_files_only=False,
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
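
For interactive use on either device, `transformers`' `TextStreamer` prints tokens as they are generated instead of after the full pass finishes; a minimal sketch that drops into the loop of either example above:

```
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(
    input_ids=input_ids,
    max_new_tokens=500,
    repetition_penalty=1.2,
    streamer=streamer,
)
```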