AINovice2005 committed on
Commit 93dd8c7
1 Parent(s): 4f74973

Update README.md

Files changed (1)
  1. README.md +30 -0
README.md CHANGED
@@ -30,7 +30,37 @@ ElEmperador is an ORPO-based finetune derived from the Mistral-7B-v0.1 base model
  ## Evals:
  BLEU: 0.209

+ ## Inference Script:
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ def generate_response(model_name, input_text, max_new_tokens=50):
+     # Load the tokenizer and model from the Hugging Face Hub
+     tokenizer = AutoTokenizer.from_pretrained(model_name)
+     model = AutoModelForCausalLM.from_pretrained(model_name)
+
+     # Tokenize the input text
+     input_ids = tokenizer(input_text, return_tensors='pt').input_ids
+
+     # Generate a response using the model
+     with torch.no_grad():
+         generated_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
+
+     # Decode the generated tokens into text
+     generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
+
+     return generated_text
+
+ if __name__ == "__main__":
+     # Model repository on the Hugging Face Hub
+     model_name = "AINovice2005/ElEmperador"
+     input_text = "Hello, how are you?"
+
+     # Generate and print the model's response
+     output = generate_response(model_name, input_text)
+
+     print(f"Input: {input_text}")
+     print(f"Output: {output}")
+ ```

  ## Results
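
The commit does not say how the BLEU figure in the Evals section above was obtained. As a rough, hypothetical sketch only: a corpus-level BLEU on the 0-1 scale (matching the reported 0.209) could be computed with NLTK's `corpus_bleu`; the hypotheses and references below are placeholder data, not the actual evaluation set.

```python
# Hypothetical illustration of a corpus-level BLEU computation (0-1 scale).
# The real evaluation data and harness for ElEmperador are not described in this commit.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Placeholder data: each hypothesis is a tokenized model output; each entry in
# `references` is a list of one or more tokenized reference answers.
hypotheses = [
    "the cat sat on the mat".split(),
    "hello , how are you ?".split(),
]
references = [
    ["the cat is sitting on the mat".split()],
    ["hello , how are you today ?".split()],
]

# Smoothing avoids zero scores when some n-gram orders have no matches.
score = corpus_bleu(references, hypotheses, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```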