ONNX model has an additional unknown input
I have generated an ONNX model using the following code:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-1.7B"
device = "cpu"  # "cuda" for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Sanity check: run a quick generation before exporting
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))

# Export the model, tracing forward() with input_ids as the only example input
torch.onnx.export(
    model,
    inputs,
    "HuggingFaceTB_SmolLM-1.7B.onnx",
    opset_version=17,
)
Why is there an additional input?
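One thing I tried (just a sketch; the input/output names, the dynamic axes, and the _named filename are my own choices, not anything the exporter requires) is passing explicit input_names, so every graph input is at least identifiable by name:

import onnx
import torch

# Re-export with named inputs/outputs and dynamic batch/sequence axes
torch.onnx.export(
    model,
    (inputs,),
    "HuggingFaceTB_SmolLM-1.7B_named.onnx",
    input_names=["input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=17,
)

# Inspect what actually ended up as graph inputs; any input beyond
# input_ids keeps an auto-generated name, which makes the extra one visible
m = onnx.load("HuggingFaceTB_SmolLM-1.7B_named.onnx")
print([i.name for i in m.graph.input])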
FYI - We have exported ONNX versions of the model already, which you can use here: https://huggingface.co./HuggingFaceTB/SmolLM-1.7B/tree/main/onnx
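If you would rather reproduce the export yourself, optimum wraps the same export pipeline and handles the extra inputs (attention mask, past key/values) for you. A minimal sketch, assuming optimum and onnxruntime are installed:

from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-1.7B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# export=True runs the optimum ONNX export under the hood and returns a
# ready-to-use ONNX Runtime model with properly named inputs
model = ORTModelForCausalLM.from_pretrained(checkpoint, export=True)

inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))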
Thanks for the prompt reply.
I have looked at the ONNX model at https://huggingface.co./HuggingFaceTB/SmolLM2-1.7B-Instruct/blob/main/onnx/model.onnx, but I don't see the tokenizer producing the attention mask, position IDs, etc. that it expects. Could you help me with how to pass a valid input here?
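For reference, here is what I'm experimenting with. This is a sketch only: model.onnx is a local download, the input names (input_ids, attention_mask, position_ids, past_key_values.*) and int64/float32 dtypes are my guesses from sess.get_inputs(), and I'm assuming the logits are the first output:

import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

sess = ort.InferenceSession("model.onnx")  # local download of the exported file

# Print what the graph actually expects; names and shapes vary per export
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")
# Calling the tokenizer directly (instead of tokenizer.encode) returns both
# input_ids and attention_mask
enc = tokenizer("def print_hello_world():", return_tensors="np")

feeds = {
    "input_ids": enc["input_ids"].astype(np.int64),
    "attention_mask": enc["attention_mask"].astype(np.int64),
}

# position_ids, if the graph asks for them, can be derived from the sequence
input_names = {i.name for i in sess.get_inputs()}
if "position_ids" in input_names:
    feeds["position_ids"] = np.arange(enc["input_ids"].shape[1], dtype=np.int64)[None, :]

# Decoder exports usually also require past_key_values.* inputs; on the first
# step these can be empty tensors (past length 0). Symbolic dims are guessed:
# "batch"-named dims -> 1, other symbolic dims (past length) -> 0.
for inp in sess.get_inputs():
    if inp.name.startswith("past_key_values"):
        shape = [
            d if isinstance(d, int) else (1 if "batch" in d else 0)
            for d in inp.shape
        ]
        feeds[inp.name] = np.zeros(shape, dtype=np.float32)

logits = sess.run(None, feeds)[0]
print(logits.shape)

Does this look like the intended way to feed the exported model, or is there a recommended helper for building these inputs?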