---
language:
- ja
tags:
- causal-lm
- not-for-all-audiences
- nsfw
pipeline_tag: text-generation
---

# Berghof NSFW 7B

<img src="https://huggingface.co./Elizezen/Berghof-vanilla-7B/resolve/main/OIG1%20(2).jpg" alt="drawing" style="width:512px;"/>

## Model Description

I think this is probably the strongest one.

## Usage

Ensure you are using Transformers 4.34.0 or newer.

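As a quick sanity check before loading the model, you can compare the installed version string against the 4.34.0 requirement. This is a minimal sketch; the `meets_minimum` helper is illustrative, not part of Transformers:

```python
def meets_minimum(installed: str, required: str = "4.34.0") -> bool:
    """Compare dotted version strings numerically on their first three fields."""
    def to_tuple(v: str) -> tuple:
        return tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(installed) >= to_tuple(required)

# Example: check the installed Transformers version before proceeding.
# import transformers
# assert meets_minimum(transformers.__version__), "Please upgrade to transformers>=4.34.0"
```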
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Elizezen/Berghof-NSFW-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Elizezen/Berghof-NSFW-7B",
    torch_dtype="auto",
)
model.eval()

# Move the model to GPU if one is available.
if torch.cuda.is_available():
    model = model.to("cuda")

input_ids = tokenizer.encode(
    "εΎθΌ©γ―η«γ§γγγεεγ―γΎγ γͺγ",
    add_special_tokens=True,
    return_tensors="pt",
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=512,
    temperature=1.0,
    top_p=0.95,
    do_sample=True,
)

# Decode only the newly generated tokens, dropping the prompt.
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
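The sampling settings in the example above (temperature 1.0, top-p 0.95) favor diverse, novel-like prose; lowering them produces more focused output. A small sketch of bundling such presets into `generate()` keyword arguments (the `gen_kwargs` helper and its preset values are illustrative, not official recommendations):

```python
def gen_kwargs(style: str = "creative") -> dict:
    """Return keyword arguments for model.generate() for a given sampling style."""
    presets = {
        # Settings from the usage example above: diverse, novel-like prose.
        "creative": {"do_sample": True, "temperature": 1.0, "top_p": 0.95},
        # Lower temperature/top_p for more focused, repeatable text.
        "focused": {"do_sample": True, "temperature": 0.7, "top_p": 0.9},
    }
    kwargs = dict(presets[style])
    kwargs["max_new_tokens"] = 512
    return kwargs

# Usage: tokens = model.generate(input_ids, **gen_kwargs("focused"))
```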

### Intended Use

The model is mainly intended for generating novels, and may be less capable at instruction-following responses.