---
license: apache-2.0
language:
- en
base_model:
- ibm-granite/granite-vision-3.2-2b
tags:
- abliterated
- uncensored
library_name: transformers
---
# huihui-ai/granite-vision-3.2-2b-abliterated
This is an uncensored version of [ibm-granite/granite-vision-3.2-2b](https://huggingface.co./ibm-granite/granite-vision-3.2-2b) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) for details).
This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.
Only the text (language-model) part of the model was processed; the vision part was left untouched.
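At a high level, abliteration estimates a "refusal direction" in the residual stream (roughly, the normalized difference between mean activations on refusal-inducing and harmless prompts) and projects that direction out of the model's weights. The sketch below shows only the projection step, with illustrative names; the linked repository contains the full procedure, including how the direction is estimated and which layers are modified.
```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component along `direction` from the outputs of `weight`.

    `direction` is a (d_model,) vector in the residual stream, e.g. the
    normalized difference between mean activations on "harmful" and
    "harmless" prompts. Names here are illustrative, not the repo's API.
    """
    direction = direction / direction.norm()
    # W' = (I - d d^T) W : outputs of W no longer have a component along d.
    return weight - torch.outer(direction, direction) @ weight
```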
## Use with ollama
To convert the model to GGUF, refer to [README-granitevision](https://github.com/ggml-org/llama.cpp/blob/master/examples/llava/README-granitevision.md).
Alternatively, you can use [huihui_ai/granite3.2-vision-abliterated](https://ollama.com/huihui_ai/granite3.2-vision-abliterated) directly:
```
ollama run huihui_ai/granite3.2-vision-abliterated
```
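As a quick multimodal check, ollama's CLI lets you include a local image path inside the prompt for vision models (the path below is a placeholder):
```
ollama run huihui_ai/granite3.2-vision-abliterated "Describe this image: ./example.png"
```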
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from huggingface_hub import hf_hub_download
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "huihui-ai/granite-vision-3.2-2b-abliterated"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path).to(device)

# Prepare image and text prompt, using the appropriate chat template.
img_path = hf_hub_download(repo_id=model_path, filename="example.png")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": img_path},
            {"type": "text", "text": "What is the highest scoring model on ChartQA and what is its score?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(device)

# Autoregressively complete the prompt.
output = model.generate(**inputs, max_new_tokens=100)

# Decode only the newly generated tokens, slicing off the prompt portion.
# (Uncomment the next line to print the full sequence, prompt included.)
# print(processor.decode(output[0], skip_special_tokens=True))
cleaned_response = processor.tokenizer.decode(
    output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(cleaned_response)
```
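If GPU memory is tight, the standard `transformers` `torch_dtype` option can load the weights in half precision; this is a generic optimization, not something specific to this model:
```python
# Optional: load in bfloat16 to roughly halve GPU memory use.
# Assumes a GPU with bfloat16 support; fall back to torch.float16 otherwise.
model = AutoModelForVision2Seq.from_pretrained(
    model_path, torch_dtype=torch.bfloat16
).to(device)
```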
### Donation
If you like this model, please click "like" and follow us for more updates.
You can also follow [x.com/support_huihui](https://x.com/support_huihui) for the latest model news from huihui.ai.
##### Your donation helps us continue development and improvement; even the price of a cup of coffee makes a difference.
- bitcoin:
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```