Error generating chat: Error: Could not locate file: "https://huggingface.co./Xenova/codegen-350M-mono/resolve/main/onnx/model_quantized.onnx".
#3 by drqsuperior - opened
The generation was working before, but now it just gives the error: "Error generating chat: Error: Could not locate file: "https://huggingface.co./Xenova/codegen-350M-mono/resolve/main/onnx/model_quantized.onnx"." Here is my current code:
import { pipeline } from '@huggingface/transformers'; // Transformers.js (v2 used '@xenova/transformers')

// Load the text-generation pipeline with the CodeGen model
const generator = await pipeline('text-generation', 'Xenova/codegen-350M-mono');

// `prompt` holds the user's input string
const messages = [
  { role: 'system', content: 'You are a programming assistant.' },
  { role: 'user', content: prompt },
];

// Render the chat messages into a single prompt string
const text = generator.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});

// Generate a completion for the rendered prompt
const output = await generator(text, {
  max_new_tokens: 128,
  do_sample: false,
  return_full_text: false,
});

const generatedText = output[0].generated_text;
Hi there! Which version of Transformers.js are you using? You're probably using v3, right? If so, you need to add:
const generator = await pipeline('text-generation', 'Xenova/codegen-350M-mono', { model_file_name: 'decoder_model_merged' });
I will eventually make copies of the correct model files and rename them to model.onnx - but this should work for now.
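For reference, here is a minimal, self-contained sketch of that workaround, assuming Transformers.js v3 imported from '@huggingface/transformers'; the 'def fib(n):' prompt is just a placeholder:

import { pipeline } from '@huggingface/transformers';

// Point the pipeline at the merged decoder ONNX file, since the default
// model_quantized.onnx is what the error above reports as missing.
const generator = await pipeline('text-generation', 'Xenova/codegen-350M-mono', {
  model_file_name: 'decoder_model_merged',
});

// Any prompt string works here; 'def fib(n):' is only an example.
const output = await generator('def fib(n):', {
  max_new_tokens: 64,
  do_sample: false,
  return_full_text: false,
});

console.log(output[0].generated_text);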
Good news - this is now fixed, and you can use it in the normal way.
Xenova changed discussion status to closed