Tags: Text Generation · Transformers · Safetensors · mistral · Named Entity Recognition · Relation Extraction · conversational · text-generation-inference · Inference Endpoints
davidreusch committed a418669 (1 parent: b64da55): Update README.md

Files changed (1): README.md (+22 −0)
README.md CHANGED
@@ -74,6 +74,28 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 Use the code below to get started with the model.
 
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import datasets
+
+# load model and tokenizer
+MODEL = "text2tech/mistral-7b-instruct-v0.2-NER-RE-qlora-1200docs"
+model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained(MODEL, padding_side="left", pad_token_id=0)
+
+# prepare example data
+data = datasets.load_dataset("text2tech/ner_re_1000_texts_GPT3.5labeled_chat_dataset")
+ex_user_prompt = [data['test']['NER_chats'][0][0]]
+ex = tokenizer.apply_chat_template(ex_user_prompt, add_generation_prompt=True, return_dict=True, return_tensors='pt')
+ex = {k: v.to(model.device) for k, v in ex.items()}
+print(ex_user_prompt[0]['content'])
+
+# generate response
+response = model.generate(**ex, max_new_tokens=300, temperature=0.0)
+
+# print decoded
+input_len = ex['input_ids'].shape[1]
+print(tokenizer.decode(response[0][input_len:], skip_special_tokens=True))
+
 [More Information Needed]
 
 ## Training Details
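The snippet added by this commit trims the prompt out of `generate()`'s output by slicing the returned sequence at the input length, since `generate()` returns the prompt tokens followed by the newly generated ones. A minimal standalone illustration of that slicing with made-up token IDs (no model is loaded; the IDs are purely illustrative):

```python
# generate() returns [prompt tokens ... new tokens], so slicing at the
# prompt length isolates only the model's continuation.
prompt_ids = [1, 733, 16289, 28793]      # dummy prompt token IDs
full_output = prompt_ids + [330, 4587, 2]  # dummy output as returned by generate()

input_len = len(prompt_ids)              # same role as ex['input_ids'].shape[1]
new_tokens = full_output[input_len:]
print(new_tokens)  # -> [330, 4587, 2]
```

Decoding only `new_tokens` (as the README snippet does with `tokenizer.decode(response[0][input_len:], ...)`) avoids echoing the prompt back in the printed response.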