---
license: apache-2.0
language:
- en
base_model:
- ibm-granite/granite-vision-3.2-2b
tags:
- abliterated
- uncensored
library_name: transformers
---

# huihui-ai/granite-vision-3.2-2b-abliterated

This is an uncensored version of [ibm-granite/granite-vision-3.2-2b](https://huggingface.co./ibm-granite/granite-vision-3.2-2b) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) for details). It is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Only the language-model component was processed; the vision encoder was left unchanged.

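At its core, abliteration is directional ablation: a "refusal direction" is estimated in activation space (typically by contrasting activations on harmful vs. harmless prompts) and then projected out of the model's hidden states or weights. The snippet below is a minimal, hypothetical NumPy illustration of that projection step only; it is not the code used to produce this model, and the `refusal_dir` here is a made-up toy vector.

```python
import numpy as np

def ablate(hidden: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Remove the component of `hidden` along the (unit-normalized) refusal direction."""
    r = refusal_dir / np.linalg.norm(refusal_dir)
    return hidden - np.dot(hidden, r) * r

# Toy example: a 2-D "hidden state" with a component along the refusal direction
h = np.array([3.0, 4.0])
r = np.array([0.0, 2.0])  # hypothetical refusal direction
print(ablate(h, r))  # component along r removed -> [3. 0.]
```

After ablation, the hidden state is orthogonal to the refusal direction, so the model can no longer express that feature in its residual stream.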
## Use with ollama

To convert the model to GGUF, refer to [README-granitevision](https://github.com/ggml-org/llama.cpp/blob/master/examples/llava/README-granitevision.md).

Alternatively, you can use [huihui_ai/granite3.2-vision-abliterated](https://ollama.com/huihui_ai/granite3.2-vision-abliterated) directly:

```
ollama run huihui_ai/granite3.2-vision-abliterated
```

## Usage

You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from huggingface_hub import hf_hub_download
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "huihui-ai/granite-vision-3.2-2b-abliterated"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path).to(device)

# Prepare the image and text prompt using the model's chat template
img_path = hf_hub_download(repo_id=model_path, filename='example.png')

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": img_path},
            {"type": "text", "text": "What is the highest scoring model on ChartQA and what is its score?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(device)

# Autoregressively complete the prompt
output = model.generate(**inputs, max_new_tokens=100)

# Decoding output[0] directly would include the prompt; slice it off
# so only the newly generated tokens are printed
cleaned_response = processor.tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(cleaned_response)
```
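The slicing step above works because `generate` returns the prompt token ids followed by the newly generated ones, so dropping the first `inputs.input_ids.shape[1]` ids leaves only the model's reply. A toy illustration with plain lists (the token ids are made up):

```python
# generate() output = prompt ids + newly generated ids (hypothetical toy values)
prompt_ids = [101, 7592, 2088]
output_ids = prompt_ids + [2023, 2003, 102]

# Same idea as output[0][inputs.input_ids.shape[1]:]
new_tokens = output_ids[len(prompt_ids):]
print(new_tokens)  # [2023, 2003, 102]
```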

### Donation

If you like it, please click "like" and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) for the latest model information from huihui.ai.

##### Your donation helps us continue development and improvement; even the price of a cup of coffee makes a difference.
- bitcoin:

```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```