This is an uncensored version of ibm-granite/granite-vision-3.2-2b created with abliteration (see remove-refusals-with-transformers to learn more about it).
This is a crude, proof-of-concept implementation of refusal removal from an LLM that does not rely on TransformerLens.
Only the text portion of the model was processed; the vision portion was left untouched.
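For context, here is a minimal sketch of the core abliteration idea: estimate a "refusal direction" as the difference between the model's mean activations on refusal-inducing and harmless prompts, then project that direction out of weight matrices that write to the residual stream. The tensors below are random stand-ins for real activations and weights, so this illustrates the math only, not the actual processing script:

```python
import torch

hidden_size = 64

# Stand-ins for residual-stream activations collected by running the
# model on a harmful prompt set and a harmless prompt set.
harmful_acts = torch.randn(128, hidden_size)
harmless_acts = torch.randn(128, hidden_size)

# The "refusal direction" is the normalized difference of the means.
refusal_dir = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
refusal_dir = refusal_dir / refusal_dir.norm()

# Project the direction out of a weight matrix that writes to the residual
# stream (e.g. an attention output projection): W <- (I - r r^T) W.
W = torch.randn(hidden_size, hidden_size)  # stand-in for a projection weight
W_abliterated = W - torch.outer(refusal_dir, refusal_dir) @ W

# The ablated weights can no longer write along the refusal direction.
print((refusal_dir @ W_abliterated).norm())  # ~0
```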
To convert the model to GGUF, please refer to README-granitevision.
You can use huihui_ai/granite3.2-vision-abliterated directly with Ollama:

```
ollama run huihui_ai/granite3.2-vision-abliterated
```
You can use this model in your applications by loading it with Hugging Face's transformers library:
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from huggingface_hub import hf_hub_download
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "huihui-ai/granite-vision-3.2-2b-abliterated"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path).to(device)

# Prepare the image and text prompt, using the appropriate chat template.
img_path = hf_hub_download(repo_id=model_path, filename="example.png")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": img_path},
            {"type": "text", "text": "What is the highest scoring model on ChartQA and what is its score?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(device)

# Autoregressively complete the prompt.
output = model.generate(**inputs, max_new_tokens=100)

# Decode only the newly generated tokens, slicing off the echoed prompt.
# print(processor.decode(output[0], skip_special_tokens=True))  # full sequence
cleaned_response = processor.tokenizer.decode(
    output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(cleaned_response)
```
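If you prefer to see tokens as they are produced rather than waiting for the full completion, transformers' TextStreamer can be passed to generate. A minimal sketch, reusing the processor, model, and inputs from the example above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt omits the echoed input.
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
output = model.generate(**inputs, max_new_tokens=100, streamer=streamer)
```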
If you like it, please click 'like' and follow us for more updates.
You can follow x.com/support_huihui to get the latest model information from huihui.ai.
BTC: bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
Base model: ibm-granite/granite-3.1-2b-base