Ahanaas committed · verified
Commit 12f2d3d · 1 Parent(s): 74235c3

Update README.md

Files changed (1):
  1. README.md +55 -3
README.md CHANGED
@@ -1,3 +1,55 @@
- ---
- license: mit
- ---
---
license: mit
language:
- en
base_model:
- NousResearch/Hermes-3-Llama-3.1-8B
---

# Inference with Your Model

This guide explains how to run inference with your custom model using the Hugging Face `transformers` library.

## Prerequisites

Make sure you have the following dependencies installed:

- Python 3.7+
- PyTorch
- Hugging Face `transformers` library

You can install the required packages using pip:

```bash
pip install torch transformers
```
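
As a quick, optional sanity check, you can confirm that the packages import correctly and see whether a GPU is visible:

```py
import torch
import transformers

print(transformers.__version__)   # Installed transformers version
print(torch.cuda.is_available())  # True if a CUDA GPU is usable
```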
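## Running Inference

The script below loads the model and tokenizer, wraps them in a `text-generation` pipeline, and formats the request with the ChatML tags that Hermes models are trained on. Set `system_prompt` and `prompt` to your own text, and point `model_id` at your own checkpoint if you are not using the base model.
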
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, logging

# Ignore warnings
logging.set_verbosity(logging.CRITICAL)

# Load the model and tokenizer; replace the repo ID with your own model
# (device_map="auto" requires the `accelerate` package)
model_id = "NousResearch/Hermes-3-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Fill in your own system prompt and user prompt
system_prompt = ""
prompt = ""

pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=128,  # Increase this to allow for longer outputs
    temperature=0.5,     # Encourages more varied outputs
    top_k=50,            # Limits sampling to the 50 most likely tokens
    do_sample=True,      # Enables sampling
    return_full_text=True,
)

# Hermes models use the ChatML prompt format; the trailing assistant tag
# tells the model to generate the assistant's reply
result = pipe(
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
generated_text = result[0]["generated_text"]

# Print the generated text (the prompt is included because return_full_text=True)
print(generated_text)
```
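
As an alternative sketch, assuming the tokenizer ships a ChatML chat template (the Hermes-3 base model's tokenizer does), you can let `transformers` build the prompt for you and return only the newly generated text:

```py
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": prompt},
]

# Build the ChatML prompt from the tokenizer's built-in chat template
chat_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# return_full_text=False strips the prompt from the returned text
result = pipe(chat_prompt, return_full_text=False)
print(result[0]["generated_text"])
```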