llmat committed
Commit 868d8a5
1 Parent(s): 9e21a70

Update README.md

Files changed (1)
  1. README.md +21 -0
README.md CHANGED
@@ -21,3 +21,24 @@ tags:
  This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ ## 💻 Usage
+
+ ```python
+ # Install dependencies first (notebook syntax; drop the leading "!" in a plain shell):
+ # !pip install -qU transformers accelerate
+ import torch
+ import transformers
+ from transformers import AutoTokenizer
+
+ model = "llmat/Mistral-v0.3-7B-ORPO"
+ messages = [{"role": "user", "content": "What is a large language model?"}]
+
+ # Format the conversation with the model's chat template
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+ # Text-generation pipeline in half precision, sharded across available devices
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ ```
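
For completeness, the same generation can be run without the high-level `pipeline` helper. The sketch below is a minimal alternative that assumes the standard `transformers` `AutoModelForCausalLM` API and reuses the repository id, chat template, and sampling settings from the snippet above; it is illustrative and not part of the original model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llmat/Mistral-v0.3-7B-ORPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a large language model?"}]

# Tokenize the chat-formatted prompt and move it to the model's device
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a completion with the same decoding settings as the pipeline example
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```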
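
The README states that the model was trained 2x faster with Unsloth and Huggingface's TRL library, but this commit does not include the training script. As rough orientation only, here is a hedged sketch of what an ORPO fine-tune with Unsloth's `FastLanguageModel` and TRL's `ORPOTrainer` typically looks like; the base checkpoint name, the placeholder preference dataset, the LoRA settings, and every hyperparameter below are assumptions for illustration, not the actual recipe behind `llmat/Mistral-v0.3-7B-ORPO`.

```python
# Hypothetical ORPO training sketch (Unsloth + TRL); all names and
# hyperparameters are illustrative assumptions, not the original recipe.
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer
from unsloth import FastLanguageModel

# Load a Mistral v0.3 base checkpoint with Unsloth's patched loader (4-bit to save memory)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3",  # assumed base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder preference dataset with "prompt", "chosen", "rejected" columns
dataset = load_dataset("your-username/your-preference-dataset", split="train")

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(
        output_dir="outputs",
        beta=0.1,                      # weight of the odds-ratio preference term
        max_length=2048,
        max_prompt_length=512,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=8e-6,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,  # older TRL releases; newer ones use processing_class=
)
trainer.train()
```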