ONNX-converted version of the model

#1

We decided to swap the existing model used by the Code Scanner in llm-guard for your model. Our tests show much better accuracy than the model we previously used from the Hugging Face Hub.

For faster inference, we use ONNX models converted with Hugging Face's Optimum.

Example of a repo with the ONNX version built in: https://huggingface.co./laiyer/deberta-v3-base-prompt-injection

First, install the dependencies:

```
pip install transformers optimum[onnxruntime]
```

Then export the model to ONNX and save it locally:

```python
from pathlib import Path

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

model_path = "philomath-1209/programming-language-identification"

# export=True converts the PyTorch weights to ONNX on the fly
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = ORTModelForSequenceClassification.from_pretrained(model_path, export=True)

# Save the ONNX model and tokenizer side by side
onnx_path = Path("onnx")
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)
```
philomath-1209 changed pull request status to merged
