Tags: Text Generation · Transformers · Safetensors · mistral · Named Entity Recognition · Relation Extraction · conversational · text-generation-inference · Inference Endpoints
davidreusch committed
Commit: 34e6447
Parent(s): a418669

Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 - text2tech/ner_re_1000_texts_GPT3.5labeled_chat_dataset
 - text2tech/ner_100abstracts_100full_texts_GPT4labeled_chat_dataset
 ---
-# Model Card for Model ID
+# Model Card for mistral-7b-instruct-v0.2-NER-RE-qlora-1200docs
 
 <!-- Provide a quick summary of what the model is/does. -->
 
@@ -74,6 +74,7 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 Use the code below to get started with the model.
 
+```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import datasets
 
@@ -95,6 +96,7 @@ response = model.generate(**ex, max_new_tokens=300, temperature=0.0)
 # print decoded
 input_len = ex['input_ids'].shape[1]
 print(tokenizer.decode(response[0][input_len:], skip_special_tokens=True))
+```
 
 [More Information Needed]
 
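The README snippet in this diff decodes only the newly generated tokens by slicing off the first `input_len` positions, since `model.generate` returns the prompt ids followed by the completion. A minimal sketch of that pattern, with pure-Python stand-ins for the tokenizer and model output (the vocabulary and token ids here are hypothetical, not from the actual model):

```python
def decode(ids):
    # Stand-in for tokenizer.decode: maps hypothetical token ids to words.
    vocab = {1: "Extract", 2: "entities", 3: ":", 4: "GPT-4", 5: "is", 6: "a", 7: "model"}
    return " ".join(vocab[i] for i in ids)

prompt_ids = [1, 2, 3]                  # tokenized instruction, e.g. "Extract entities :"
response = [prompt_ids + [4, 5, 6, 7]]  # generate() echoes the prompt before the completion
input_len = len(prompt_ids)             # corresponds to ex['input_ids'].shape[1] in the README

# Slice off the prompt so only the model's answer is decoded.
completion = decode(response[0][input_len:])
print(completion)  # "GPT-4 is a model"
```

Without the slice, the decoded string would repeat the entire prompt before the answer, which is why the README computes `input_len` from the input tensor's shape.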