---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Relation Extraction model for KBQA

This is a fine-tuned version of TinyLlama for the Relation Extraction task in Knowledge Base Question Answering (KBQA).

## Model Details

The model is trained on the [GrailQA dataset](https://github.com/dki-lab/GrailQA/tree/main/data), which consists of questions and their related information, such as entities and relations. The relations in the data are annotated against the Freebase knowledge base.

### How to use

You will need `transformers>=4.34`. See the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.

### Direct Use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "dice-research/Ft_TinnyLlama_QA_RE"

# Load the tokenizer and reuse the EOS token for padding
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Load the model in full precision (no quantization) across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    quantization_config=None,
    device_map="auto",
)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def prompt_REQA(question):
    # Wrap the question in the chat template expected by the fine-tuned model
    messages = [
        {"role": "user", "content": question},
    ]
    prompt = pipe.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    response = pipe(
        prompt,
        max_new_tokens=20,
        do_sample=True,
        temperature=0.6,
        top_k=5,
        top_p=0.95,
    )[0]["generated_text"]
    # Keep only the assistant's answer, dropping the echoed prompt
    return response.split("<|assistant|>\n")[1]

prompt_REQA("how many electronic arts games are available for sale in the united states of america?")
```
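
Because the example above samples (`do_sample=True`), repeated calls on the same question can return different relations. If you want reproducible output, for instance when evaluating against the gold GrailQA annotations, you can switch to greedy decoding. Below is a minimal sketch of such a variant; it assumes the `pipe` object from the snippet above is already constructed, and `prompt_REQA_greedy` is a hypothetical helper name, not part of this repository.

```python
# Minimal sketch: deterministic variant of prompt_REQA for evaluation runs.
# Assumes `pipe` from the Direct Use example above has been created.
def prompt_REQA_greedy(question):
    messages = [{"role": "user", "content": question}]
    prompt = pipe.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    # do_sample=False enables greedy decoding, so the same question
    # always yields the same generated relation string
    response = pipe(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]
    return response.split("<|assistant|>\n")[1]

prompt_REQA_greedy("how many electronic arts games are available for sale in the united states of america?")
```

Greedy decoding trades output diversity for reproducibility, which is usually preferable when extracting a single relation per question.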